The problem of learning long-term dependencies in recurrent networks

Y Bengio, P Frasconi, P Simard - IEEE international conference on neural networks, 1993 - ieeexplore.ieee.org
The authors seek to train recurrent neural networks in order to map input sequences to output sequences, for applications in sequence recognition or production. Results are presented showing that learning long-term dependencies in such recurrent networks using gradient descent is a very difficult task. It is shown how this difficulty arises when robustly latching bits of information with certain attractors. The derivatives of the output at time t with respect to the unit activations at time zero tend rapidly to zero as t increases for most input values. In such a situation, simple gradient descent techniques appear inappropriate. The consideration of alternative optimization methods and architectures is suggested.
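The decay of derivatives described in the abstract can be illustrated numerically. The following sketch (a hypothetical example, not the authors' experiment) iterates a small recurrent map h_{t+1} = tanh(W h_t) with a contractive weight matrix and accumulates the Jacobian of the current state with respect to the initial state; its norm shrinks rapidly with t, which is the vanishing-gradient effect the paper analyzes.

```python
import numpy as np

# Hypothetical illustration of the vanishing-gradient effect:
# a recurrent map h_{t+1} = tanh(W h_t) with spectral norm of W below 1.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)    # rescale so the spectral norm is 0.5

h = rng.standard_normal(n)
J = np.eye(n)                      # accumulated Jacobian d h_t / d h_0
norms = []
for t in range(30):
    h = np.tanh(W @ h)
    # chain rule: d h_{t+1}/d h_0 = diag(1 - h_{t+1}^2) @ W @ (d h_t/d h_0)
    J = np.diag(1.0 - h**2) @ W @ J
    norms.append(np.linalg.norm(J, 2))

print(norms[0], norms[9], norms[29])   # the norm decays toward zero with t
```

Because each step multiplies the Jacobian by a matrix of norm at most 0.5, the gradient signal reaching the initial state decays at least geometrically, so a gradient-descent update barely reflects events far in the past.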