Legendre memory units: Continuous-time representation in recurrent neural networks

A. Voelker, I. Kajić, C. Eliasmith - Advances in Neural Information Processing Systems, 2019 - proceedings.neurips.cc
Abstract
We propose a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. The Legendre Memory Unit (LMU) is mathematically derived to orthogonalize its continuous-time history--doing so by solving $d$ coupled ordinary differential equations (ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree $d-1$. Backpropagation across LMUs outperforms equivalently-sized LSTMs on a chaotic time-series prediction task, improves memory capacity by two orders of magnitude, and significantly reduces training and inference times. LMUs can efficiently handle temporal dependencies spanning 100,000 time-steps, converge rapidly, and use few internal state-variables to learn complex functions spanning long windows of time--exceeding state-of-the-art performance among RNNs on permuted sequential MNIST. These results are due to the network's disposition to learn scale-invariant features independently of step size. Backpropagation through the ODE solver allows each layer to adapt its internal time-step, enabling the network to learn task-relevant time-scales. We demonstrate that LMU memory cells can be implemented using $m$ recurrently-connected Poisson spiking neurons, $\mathcal{O}(m)$ time and memory, with error scaling as $\mathcal{O}(d/\sqrt{m})$. We discuss implementations of LMUs on analog and digital neuromorphic hardware.
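
To make the abstract's description concrete, below is a minimal NumPy sketch of the LMU's linear memory: it constructs the $(A, B)$ state-space matrices that orthogonalize the sliding window, advances the memory with a simple Euler step of $\theta\,\dot{x}(t) = Ax(t) + Bu(t)$, and decodes the window through shifted Legendre polynomials up to degree $d-1$. The matrix entries follow the formulation reported in the paper, but the function names, the Euler discretization, and the sine-wave example are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.special import legendre

def lmu_state_space(d):
    """Build the (A, B) matrices of the LMU's d-dimensional linear memory,
    which orthogonalizes a sliding window of the input history."""
    A = np.zeros((d, d))
    B = np.zeros((d, 1))
    for i in range(d):
        B[i, 0] = (2 * i + 1) * (-1) ** i
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    return A, B

def euler_step(x, u, A, B, theta, dt):
    """Advance theta * dx/dt = A x + B u by one Euler step of size dt
    (a zero-order-hold discretization would be more accurate)."""
    return x + (dt / theta) * (A @ x + B * u)

def decode_window(x, d, n_points=50):
    """Approximate u(t - theta') for theta'/theta in [0, 1] from the state x,
    using shifted Legendre polynomials up to degree d - 1."""
    r = np.linspace(0.0, 1.0, n_points)                     # theta' / theta
    P = np.stack([legendre(i)(2 * r - 1) for i in range(d)])
    return P.T @ x                                          # (n_points, 1)

# Illustrative usage: drive the memory with a sine wave for 2 seconds,
# then read back an approximation of the last theta = 1 s of history.
d, theta, dt = 6, 1.0, 1e-3
A, B = lmu_state_space(d)
x = np.zeros((d, 1))
for t in np.arange(0.0, 2.0, dt):
    x = euler_step(x, np.sin(2 * np.pi * t), A, B, theta, dt)
window = decode_window(x, d)
```

The Euler step is used here only for readability; in practice the same continuous-time dynamics are discretized (e.g. with zero-order hold) and backpropagation through that recurrence is what lets each layer adapt its internal time-step, as the abstract notes.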