Authors
C Lee Giles, Christian W Omlin
Publication date
1993/3/28
Conference
IEEE International Conference on Neural Networks
Pages
801-806
Publisher
IEEE
Description
Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs), and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, it is shown that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known, nontrivial, randomly generated regular grammar show that the networks not only preserve correct prior knowledge, but are also able to correct, through training, inserted prior knowledge that was wrong, i.e., inserted rules that were not among those of the randomly generated grammar.
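The insertion method lends itself to a short illustration. The sketch below assumes a second-order recurrent network of the kind used in this line of work, where a weight W[j, i, k] couples current-state neuron i and input symbol k to next-state neuron j; a known DFA transition delta(i, k) = j is programmed before training by setting that weight to +H and its competitors to -H, with the rule strength H a free parameter. The names here (insert_rules, step, H, noise) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def insert_rules(n_states, n_symbols, known_transitions, H=3.0, noise=0.1, seed=0):
    """Program a second-order weight tensor W[j, i, k] so that reading
    symbol k while in state i drives next-state neuron j high whenever
    the known DFA transition delta(i, k) = j was inserted (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    # Transitions not covered by prior knowledge keep small random
    # weights, just as they would for ordinary training from scratch.
    W = rng.uniform(-noise, noise, size=(n_states, n_states, n_symbols))
    for (i, k), j in known_transitions.items():
        W[:, i, k] = -H   # inhibit every candidate next-state neuron ...
        W[j, i, k] = +H   # ... except the programmed successor, which is excited
    return W

def step(W, state, symbol_onehot):
    """One second-order update: S_j <- sigmoid(sum_{i,k} W[j,i,k] * S_i * x_k)."""
    net = np.einsum('jik,i,k->j', W, state, symbol_onehot)
    return 1.0 / (1.0 + np.exp(-net))

# Example: 3-state automaton over alphabet {0, 1} with two inserted rules.
rules = {(0, 0): 1, (1, 1): 2}                 # delta(q0,'0')=q1, delta(q1,'1')=q2
W = insert_rules(n_states=3, n_symbols=2, known_transitions=rules)
state = np.array([1.0, 0.0, 0.0])              # start in q0 (one-hot state vector)
state = step(W, state, np.array([1.0, 0.0]))   # read symbol '0'
print(state.round(2))                          # neuron 1 dominates, approximating q1
```

Gradient training would then start from this programmed W, so that correct inserted rules are reinforced by the data while wrong ones can be unlearned, which is the rule-refinement behavior the abstract reports.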
Total citations
[Citations-per-year chart, 1993-2023]
Scholar articles
CL Giles, CW Omlin - IEEE International Conference on Neural Networks, 1993