Authors
Axel Cleeremans, David Servan-Schreiber, James L McClelland
Publication date
1989/9/1
Journal
Neural computation
Volume
1
Issue
3
Pages
372-381
Publisher
MIT Press
Abstract
We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not …
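The architecture described in the abstract can be sketched as follows. This is a minimal illustrative forward pass of an Elman-style simple recurrent network, not the paper's implementation: the hidden state from time-step t−1 is combined with the one-hot input at time t to produce a prediction for element t+1. All dimensions, weight names, and initialization choices here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 7   # hypothetical one-hot alphabet of a small finite-state grammar
n_hidden = 3    # a "minimal number of hidden units", chosen arbitrarily here

# Randomly initialized weights (untrained; the paper trains these on grammar strings).
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_symbols))  # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # previous hidden (context) -> hidden
W_hy = rng.normal(scale=0.5, size=(n_symbols, n_hidden))  # hidden -> output

def srn_step(x_t, h_prev):
    """One SRN step: combine element t with the hidden state from t-1,
    and return the new hidden state plus a distribution over element t+1."""
    h = np.tanh(W_xh @ x_t + W_hh @ h_prev)   # new hidden state
    logits = W_hy @ h
    p = np.exp(logits - logits.max())         # numerically stable softmax
    return h, p / p.sum()

# Run a short (arbitrary) symbol sequence through the network.
h = np.zeros(n_hidden)
for sym in [0, 2, 5]:
    x = np.eye(n_symbols)[sym]                # one-hot encoding of element t
    h, pred = srn_step(x, h)
```

After training on strings from a finite-state grammar, `pred` would assign high probability to exactly the symbols the grammar permits next; with a minimal hidden layer, the hidden-state patterns `h` come to cluster by grammar node.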
Total citations
[Yearly citation histogram, 1990–2024; per-year counts garbled in extraction]