Non-iterative online sequential learning strategy for autoencoder and classifier

AN Paul, P Yan, Y Yang, H Zhang, S Du… - Neural Computing and Applications, 2021 - Springer
Abstract
Artificial neural network training algorithms aim to optimize the network parameters with respect to a pre-defined cost function. Gradient-based training algorithms support iterative learning and have gained immense popularity for training different artificial neural networks end-to-end. However, training through gradient methods is time-consuming. Another family of training algorithms is based on the Moore–Penrose inverse and is much faster than many gradient methods. Nevertheless, most of those algorithms are non-iterative and thus do not support mini-batch learning by nature. This work extends two non-iterative Moore–Penrose inverse-based training algorithms to enable online sequential learning: a single-hidden-layer autoencoder training algorithm and a sub-network-based classifier training algorithm. We further present an approach that uses the proposed autoencoder for self-supervised dimension reduction and then uses the proposed classifier for supervised classification. The experimental results show that the proposed approach achieves satisfactory classification accuracy on many benchmark datasets with extremely low time consumption (up to 50 times faster than the support vector machine on the CIFAR-10 dataset).
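The core idea the abstract describes — non-iterative Moore–Penrose-inverse training extended to process data in sequential mini-batches — can be sketched in the style of the well-known OS-ELM recursive update. This is an illustrative sketch only, not the paper's exact autoencoder or sub-network classifier algorithm; the layer sizes, activation, and variable names here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single hidden layer with random, fixed input weights (ELM-style assumption).
n_in, n_hidden, n_out = 5, 20, 1
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    """Hidden-layer activation H = g(XW + b)."""
    return np.tanh(X @ W + b)

# --- non-iterative batch training: output weights via the MP inverse ---
X0, T0 = rng.standard_normal((50, n_in)), rng.standard_normal((50, n_out))
H0 = hidden(X0)
beta = np.linalg.pinv(H0) @ T0            # beta = H0^+ T0
P = np.linalg.inv(H0.T @ H0)              # running inverse covariance

# --- online sequential update on a new mini-batch (recursive least squares) ---
X1, T1 = rng.standard_normal((10, n_in)), rng.standard_normal((10, n_out))
H1 = hidden(X1)
K = np.linalg.inv(np.eye(len(X1)) + H1 @ P @ H1.T)
P = P - P @ H1.T @ K @ H1 @ P
beta = beta + P @ H1.T @ (T1 - H1 @ beta)

# The sequential result coincides with non-iterative retraining on all data.
H_all, T_all = np.vstack([H0, H1]), np.vstack([T0, T1])
beta_batch = np.linalg.pinv(H_all) @ T_all
```

The recursive update avoids recomputing a pseudoinverse over all accumulated samples: each mini-batch only requires inverting a matrix of the mini-batch size, which is what makes this family of methods fast relative to iterative gradient training.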