Formal convergence analysis on deterministic ℓ1-regularization based mini-batch learning for RBF networks

Z Liu, CS Leung, HC So - Neurocomputing, 2023 - Elsevier
Conventional convergence analysis of mini-batch learning is usually based on the
stochastic gradient concept, in which we assume that the training data are presented in a …
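
The snippet does not show the paper's update rule; one plausible form of ℓ1-regularized mini-batch learning for an RBF network is proximal gradient descent on the output weights with soft-thresholding. A minimal sketch, assuming Gaussian basis functions with fixed centers and width (all names and constants here are illustrative, not the paper's exact algorithm):

```python
# Hypothetical sketch: l1-regularized mini-batch training of RBF output
# weights via proximal gradient (soft-thresholding). Centers and width are
# assumed fixed; constants are illustrative.
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian basis: phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_l1_rbf(X, y, centers, width=1.0, lam=1e-3, lr=1e-2,
                 batch_size=32, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(centers.shape[0])
    n = X.shape[0]
    for _ in range(epochs):
        order = rng.permutation(n)  # one deterministic pass over shuffled data
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Phi = rbf_features(X[idx], centers, width)
            grad = Phi.T @ (Phi @ w - y[idx]) / len(idx)
            w = soft_threshold(w - lr * grad, lr * lam)  # proximal l1 step
    return w
```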

Convergence of mini-batch learning for fault aware RBF networks

E Cha, CS Leung, E Wong - … 2020, Bangkok, Thailand, November 18–22 …, 2020 - Springer
Between the online and batch modes there is the mini-batch concept, which takes a subset of
the training samples to update the weights at each iteration. Traditional analysis of mini-batch …
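
The online/mini-batch/batch distinction the snippet describes reduces to how many samples feed each update; a minimal sketch for a linear least-squares model, where batch_size=1 recovers online learning and batch_size=n recovers full-batch gradient descent:

```python
# Minimal illustration of the online / mini-batch / batch spectrum for a
# linear model: batch_size=1 gives online learning, batch_size=n gives
# full-batch gradient descent, anything in between is mini-batch.
import numpy as np

def gradient_pass(X, y, w, lr=0.01, batch_size=16):
    n = X.shape[0]
    for start in range(0, n, batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = Xb.T @ (Xb @ w - yb) / len(yb)  # mean-squared-error gradient
        w = w - lr * grad                      # one update per mini-batch
    return w
```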

Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks

H Zhang, Y Zhang, S Zhu, D Xu - Neurocomputing, 2020 - Elsevier
This paper investigates the fully complex mini-batch gradient algorithm for training complex-
valued neural networks. The mini-batch gradient method has been widely used in neural …
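
The paper's network architecture is not visible from the snippet; a minimal sketch of a complex mini-batch gradient step for a single complex-valued linear layer, using the Wirtinger (conjugate) gradient of the squared error. This is an assumed setup, not the paper's algorithm:

```python
# Hypothetical sketch of a complex mini-batch gradient step for one
# complex-valued linear layer, using the Wirtinger/conjugate gradient
# dL/dconj(w) = X^H (X w - y) / b of the squared-error loss.
import numpy as np

def complex_minibatch_step(Xb, yb, w, lr=0.01):
    residual = Xb @ w - yb                   # complex residual
    grad = Xb.conj().T @ residual / len(yb)  # conjugate (Wirtinger) gradient
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))
w_true = np.array([1 + 2j, -0.5j, 0.3, 2 - 1j])
y = X @ w_true
w = np.zeros(4, dtype=complex)
for epoch in range(200):
    for s in range(0, 64, 16):               # mini-batches of 16
        w = complex_minibatch_step(X[s:s + 16], y[s:s + 16], w)
```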

Efficient sequential and batch learning artificial neural network methods for classification problems

Z Runxuan - Singapore, 2005 - researchgate.net
This thesis focuses on the development and applications of efficient sequential and batch
learning artificial neural network methods for classification problems, with emphasis on bio …

Convergence analysis on the deterministic mini-batch learning algorithm for noise resilient radial basis function networks

HT Wong, CS Leung, S Kwong - International Journal of Machine …, 2022 - Springer
This paper gives a formal convergence analysis of the mini-batch training algorithm for
noise resilient radial basis function (RBF) networks. Unlike the conventional analysis, which …
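
The snippet does not spell out the noise-resilience mechanism; one common formulation in this line of work trains the RBF output weights under multiplicative weight noise, whose expected squared error adds a data-dependent ridge penalty. A hedged sketch of one mini-batch step under that assumed objective:

```python
# Hedged sketch: mini-batch step for an RBF output layer trained to tolerate
# multiplicative weight noise w_j(1+beta_j), E[beta_j]=0, Var[beta_j]=sigma2.
# Taking the expectation of the squared error over the noise gives
#   J(w) = mean((Phi w - y)^2) + sigma2 * sum_j w_j^2 * mean_i(phi_ij^2),
# i.e. a data-dependent ridge penalty. Assumed form, not the paper's exact one.
import numpy as np

def noise_resilient_step(Phi_b, y_b, w, sigma2=0.01, lr=0.05):
    b = len(y_b)
    grad_mse = 2.0 * Phi_b.T @ (Phi_b @ w - y_b) / b
    grad_reg = 2.0 * sigma2 * (Phi_b ** 2).mean(axis=0) * w
    return w - lr * (grad_mse + grad_reg)
```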

A learning algorithm with a gradient normalization and a learning rate adaptation for the mini-batch type learning

D Ito, T Okamoto, S Koakutsu - … of the Society of Instrument and …, 2017 - ieeexplore.ieee.org
With the advance of deep learning, the development of high-performance optimization
algorithms to solve the learning problem of neural networks is in strong demand …
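
The exact normalization and adaptation rules of Ito et al. are not given in the snippet; one simple combination normalizes each mini-batch gradient to unit norm and grows or shrinks the learning rate according to the observed loss change. A sketch with illustrative constants:

```python
# Hypothetical sketch combining gradient normalization with a simple
# loss-driven learning-rate adaptation for mini-batch updates. The exact
# rules of Ito et al. are not shown in the snippet; constants are illustrative.
import numpy as np

def normalized_adaptive_pass(X, y, w, lr, batch_size=32,
                             grow=1.05, shrink=0.7, eps=1e-12):
    n = X.shape[0]
    for s in range(0, n, batch_size):
        Xb, yb = X[s:s + batch_size], y[s:s + batch_size]
        loss_before = np.mean((Xb @ w - yb) ** 2)
        grad = Xb.T @ (Xb @ w - yb) / len(yb)
        step = grad / (np.linalg.norm(grad) + eps)  # unit-norm gradient
        w_new = w - lr * step
        loss_after = np.mean((Xb @ w_new - yb) ** 2)
        # grow the rate and accept the step on improvement, else shrink and reject
        lr = lr * grow if loss_after < loss_before else lr * shrink
        w = w_new if loss_after < loss_before else w
    return w, lr
```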

Study on the large batch size training of neural networks based on the second order gradient

F Gao, H Zhong - arXiv preprint arXiv:2012.08795, 2020 - arxiv.org
Large batch size training in deep neural networks (DNNs) suffers from a well-known
'generalization gap' that causes a marked degradation in generalization performance …

Effect of batch learning in multilayer neural networks

K Fukumizu - Gen, 1998 - researchgate.net
This paper discusses batch gradient descent learning in multilayer networks with a large
number of statistical training data. We emphasize the difference between regular cases …

Discrete error dynamics of mini-batch gradient descent for least squares regression

J Lok, R Sonthalia, E Rebrova - arXiv preprint arXiv:2406.03696, 2024 - arxiv.org
We study the discrete dynamics of mini-batch gradient descent for least squares regression
when sampling without replacement. We show that the dynamics and generalization error of …
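
A minimal reproduction of the setting studied (the sampling scheme, not the paper's analysis): each epoch partitions a random permutation of the data into mini-batches, so samples are drawn without replacement, and the error w_t - w* is tracked across epochs:

```python
# Sketch of the setting: mini-batch gradient descent for least squares where
# each epoch partitions a random permutation of the data into batches, so
# samples are used without replacement. Tracks the error norm ||w_t - w*||.
import numpy as np

rng = np.random.default_rng(1)
n, d, b, lr = 256, 8, 32, 0.05
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star
w = np.zeros(d)
errors = []
for epoch in range(100):
    perm = rng.permutation(n)                  # without-replacement ordering
    for s in range(0, n, b):
        idx = perm[s:s + b]
        w -= lr * X[idx].T @ (X[idx] @ w - y[idx]) / b
    errors.append(np.linalg.norm(w - w_star))  # discrete error dynamics
```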

Mini-batch algorithms with Barzilai–Borwein update step

Z Yang, C Wang, Y Zang, J Li - Neurocomputing, 2018 - Elsevier
As a way to accelerate stochastic schemes, mini-batch optimization has been a popular
choice for large-scale learning due to its good general performance and ease of parallel …
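
The snippet leaves the concrete update unstated; the classical Barzilai-Borwein (BB1) step size is alpha_k = (s^T s) / (s^T dg) with s = w_k - w_{k-1} and dg = g_k - g_{k-1}. A hedged sketch of using it inside a mini-batch loop; the safeguards are illustrative, since raw BB steps computed from noisy mini-batch gradients can blow up:

```python
# Hedged sketch of a Barzilai-Borwein (BB1) step size inside a mini-batch
# loop: alpha = (s^T s) / (s^T dg), s = w_k - w_{k-1}, dg = g_k - g_{k-1}.
# Practical variants average or clip the step; the clipping below is
# illustrative, not the paper's exact scheme.
import numpy as np

def bb_minibatch_pass(X, y, w, alpha=0.01, batch_size=32,
                      alpha_min=1e-4, alpha_max=1.0):
    n = X.shape[0]
    w_prev, g_prev = None, None
    for s0 in range(0, n, batch_size):
        Xb, yb = X[s0:s0 + batch_size], y[s0:s0 + batch_size]
        g = Xb.T @ (Xb @ w - yb) / len(yb)
        if w_prev is not None:
            s_vec, dg = w - w_prev, g - g_prev
            denom = s_vec @ dg
            if denom > 0:                      # keep the step well defined
                alpha = np.clip(s_vec @ s_vec / denom, alpha_min, alpha_max)
        w_prev, g_prev = w.copy(), g.copy()
        w = w - alpha * g
    return w, alpha
```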