Spatio-temporal backpropagation for training high-performance spiking neural networks

Y. Wu, L. Deng, G. Li, L. Shi - Frontiers in Neuroscience, 2018 - frontiersin.org
Spiking neural networks (SNNs) are promising for realizing brain-like behaviors, since spikes can encode spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make high-performance supervised training of SNNs possible. However, these methods focus primarily on information in the spatial domain and pay less attention to the dynamics in the temporal domain. This can create a performance bottleneck and require many additional training techniques. Another underlying problem is that spike activity is inherently non-differentiable, which makes supervised training of SNNs more difficult. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. To address the non-differentiability of SNNs, we propose an approximated derivative for spike activity that is suitable for gradient-descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and does not require any additional complicated techniques. We evaluate the method with both fully connected and convolutional architectures on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. The results show that our approach achieves the best accuracy among existing state-of-the-art algorithms on spiking networks. This work provides a new perspective for investigating high-performance SNNs for future brain-like computing paradigms with rich spatio-temporal dynamics.
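To illustrate the two key ideas in the abstract, the sketch below (PyTorch-style) shows a spike function with an approximated derivative and a leaky integrate-and-fire layer unrolled over time, so gradients flow both layer-by-layer (SD) and across time steps (TD). The rectangular surrogate window and the threshold/decay values here are illustrative assumptions, not parameters taken from the paper.

```python
import torch
import torch.nn as nn

THRESH = 1.0   # firing threshold (illustrative value)
DECAY  = 0.2   # membrane decay factor (illustrative value)
LENS   = 0.5   # half-width of the rectangular surrogate window (illustrative)

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, with a rectangular approximated
    derivative in the backward pass so gradient descent can pass through
    the non-differentiable firing step."""
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > THRESH).float()

    @staticmethod
    def backward(ctx, grad_output):
        membrane, = ctx.saved_tensors
        # Gradient is 1/(2*LENS) inside a window around the threshold, 0 outside.
        surrogate = ((membrane - THRESH).abs() < LENS).float() / (2 * LENS)
        return grad_output * surrogate

spike_fn = SpikeFn.apply

class STBPLayer(nn.Module):
    """One fully connected spiking layer unrolled over T time steps: autograd
    then accumulates gradients layer-by-layer (spatial domain) and
    step-by-step through the membrane state (temporal domain)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x_seq):                      # x_seq: [T, batch, in_features]
        T, batch, _ = x_seq.shape
        mem = torch.zeros(batch, self.fc.out_features, device=x_seq.device)
        spike = torch.zeros_like(mem)
        spikes = []
        for t in range(T):
            # Leaky integrate-and-fire update: decay (with reset after a spike),
            # integrate the current input, then fire.
            mem = mem * DECAY * (1 - spike) + self.fc(x_seq[t])
            spike = spike_fn(mem)
            spikes.append(spike)
        return torch.stack(spikes)                 # [T, batch, out_features]
```

In a full network, several such layers would be stacked and the output spikes averaged over the T steps into a rate-coded prediction; because the loop is unrolled, autograd collects the temporal-domain gradients automatically alongside the usual spatial-domain ones.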