Back-propagation algorithm with variable adaptive momentum

AA Hameed, B Karlik, MS Salman - Knowledge-Based Systems, 2016 - Elsevier
In this paper, we propose a novel machine learning classifier by deriving a new adaptive
momentum back-propagation (BP) artificial neural networks algorithm. The proposed …
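For orientation, a minimal sketch of the adaptive-momentum idea: a gradient step whose momentum coefficient is recomputed each iteration instead of being fixed. The adaptation rule below (momentum shrinks when the gradient is large) is illustrative, not the paper's exact formula.

```python
import numpy as np

def adaptive_momentum_step(w, grad, velocity, lr=0.1, mu_max=0.9):
    """One gradient step with a variable momentum coefficient: momentum
    shrinks when the gradient is large and grows as it vanishes.
    (Illustrative adaptation rule, not the paper's exact formula.)"""
    mu = mu_max / (1.0 + np.linalg.norm(grad))
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Demo: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(300):
    w, v = adaptive_momentum_step(w, grad=w, velocity=v)
```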

Convergence analysis of online gradient method for BP neural networks

W Wu, J Wang, M Cheng, Z Li - Neural Networks, 2011 - Elsevier
This paper considers a class of online gradient learning methods for backpropagation (BP)
neural networks with a single hidden layer. We assume that in each training cycle, each …
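As a concrete reference point for the setting these convergence results cover, here is a minimal online (per-sample) gradient loop for a one-hidden-layer network; the toy data, sizes, and learning rate are illustrative, not the paper's assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x1 + x2 from random inputs.
X = rng.uniform(-1, 1, size=(200, 2))
y = X.sum(axis=1)

# Single hidden layer of sigmoid units, linear output unit.
V = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
w = rng.normal(scale=0.5, size=8)        # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(50):
    for x, t in zip(X, y):               # online: update after each sample
        h = sigmoid(x @ V)               # hidden activations
        err = h @ w - t
        # Backpropagated gradients for this single sample.
        grad_w = err * h
        grad_V = np.outer(x, err * w * h * (1 - h))
        w -= lr * grad_w
        V -= lr * grad_V

mse = np.mean((sigmoid(X @ V) @ w - y) ** 2)
```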

Deterministic convergence of an online gradient method for BP neural networks

W Wu, G Feng, Z Li, Y Xu - IEEE Transactions on Neural …, 2005 - ieeexplore.ieee.org
Online gradient methods are widely used for training feedforward neural networks. We prove
in this paper a convergence theorem for an online gradient method with variable step size …

Global convergence of online BP training with dynamic learning rate

R Zhang, ZB Xu, GB Huang… - IEEE Transactions on …, 2012 - ieeexplore.ieee.org
The online backpropagation (BP) training procedure has been extensively explored in
scientific research and engineering applications. One of the main factors affecting the …
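A common instance of a dynamic learning rate is a step size that decays with the iteration count, e.g. eta_t = eta_0 / (1 + t/tau); the particular schedule below is illustrative, not necessarily the one analyzed in the paper.

```python
import numpy as np

def decayed_lr(eta0, t, tau=100.0):
    """Illustrative dynamic learning rate: decays like 1/t, so the step
    sizes satisfy the classic conditions used in online convergence
    proofs (sum of eta_t diverges, sum of eta_t^2 stays finite)."""
    return eta0 / (1.0 + t / tau)

# Online gradient descent on f(w) = ||w||^2 / 2 with the decaying rate.
w = np.array([3.0, -4.0])
for t in range(2000):
    grad = w
    w = w - decayed_lr(0.5, t) * grad
```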

Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions

Q Fan, Q Kang, JM Zurada - Information Sciences, 2022 - Elsevier
This work proves the deterministic convergence of the Sigma-Pi-Sigma neural network
based on the batch gradient learning algorithm under certain relaxed conditions. We …
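For readers unfamiliar with the architecture: a Sigma-Pi-Sigma network stacks a summing layer, a product (pi) layer, and a summing output layer. A toy forward pass under assumed shapes (illustrative, not the paper's exact architecture):

```python
import numpy as np

def sigma_pi_sigma(x, W1, groups, w2):
    """Forward pass of a toy Sigma-Pi-Sigma network.

    sigma layer: s = W1 @ x          (weighted sums of the inputs)
    pi layer:    p_j = product of s[i] over each index group
    sigma layer: output = w2 @ p     (weighted sum of the products)
    """
    s = W1 @ x
    p = np.array([np.prod(s[g]) for g in groups])
    return w2 @ p

x = np.array([1.0, 2.0])
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
groups = [[0, 1], [2]]    # first pi unit multiplies s0*s1, second passes s2
w2 = np.array([1.0, 0.5])
out = sigma_pi_sigma(x, W1, groups, w2)
# s = [1, 2, 3]; p = [2, 3]; out = 1*2 + 0.5*3 = 3.5
```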

Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty

Q Fan, JM Zurada, W Wu - Neurocomputing, 2014 - Elsevier
Minimization of the training regularization term has been recognized as an important
objective for sparse modeling and generalization in feedforward neural networks. Most of …
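The L1/2 penalty sum |w_i|^(1/2) is not differentiable at zero, which is why a smoothed variant is used. One possible smoothing (illustrative; the paper's exact surrogate may differ) keeps the square root away from zero and substitutes a quadratic near it, matched in value and slope:

```python
import numpy as np

def smoothed_l_half(w, a=0.1):
    """Smoothed |w|^(1/2): the exact square root for |w| >= a, and a
    quadratic surrogate on [-a, a] whose value and slope match at |w| = a.
    (Illustrative smoothing; the paper's exact surrogate may differ.)"""
    t = np.abs(w)
    smooth = t * t / (4 * a ** 1.5) + 0.75 * np.sqrt(a)
    return np.where(t >= a, np.sqrt(t), smooth)

def smoothed_l_half_grad(w, a=0.1):
    """Gradient of the smoothed penalty: finite everywhere, unlike
    d/dw |w|^(1/2) = sign(w) / (2 sqrt(|w|)), which blows up at 0."""
    t = np.abs(w)
    outer = np.sign(w) / (2 * np.sqrt(np.maximum(t, a)))
    inner = w / (2 * a ** 1.5)
    return np.where(t >= a, outer, inner)
```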

Convergence of gradient method with momentum for two-layer feedforward neural networks

N Zhang, W Wu, G Zheng - IEEE Transactions on Neural …, 2006 - ieeexplore.ieee.org
A gradient method with momentum for two-layer feedforward neural networks is considered.
The learning rate is set to be a constant and the momentum factor an adaptive variable. Both …

Boundedness and convergence of online gradient method with penalty for feedforward neural networks

H Zhang, W Wu, F Liu, M Yao - IEEE Transactions on Neural …, 2009 - ieeexplore.ieee.org
In this brief, we consider an online gradient method with penalty for training feedforward
neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its …
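A penalty proportional to the norm of the weights shows up in the gradient update as the familiar weight-decay term. A minimal sketch (the lambda and learning-rate values are illustrative):

```python
import numpy as np

def penalized_step(w, grad_loss, lr=0.1, lam=0.1):
    """Gradient step on E(w) + (lam/2) * ||w||^2: the penalty contributes
    lam * w to the gradient, shrinking the weights every update -- the
    mechanism that keeps them bounded. (lam and lr are illustrative.)"""
    return w - lr * (grad_loss + lam * w)

# With a zero data gradient, the penalty alone drives the weights to 0.
w = np.array([5.0, -5.0])
for _ in range(500):
    w = penalized_step(w, grad_loss=np.zeros_like(w))
```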

An online gradient method with momentum for two-layer feedforward neural networks

N Zhang - Applied Mathematics and Computation, 2009 - Elsevier
An online gradient method with momentum for two-layer feedforward neural networks is
considered. The momentum coefficient is chosen in an adaptive manner to accelerate and …

Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks

J Wang, J Yang, W Wu - IEEE Transactions on Neural …, 2011 - ieeexplore.ieee.org
Two backpropagation algorithms with momentum for feedforward neural networks with a
single hidden layer are considered. It is assumed that the training samples are supplied to …
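The two sample orderings compared in such analyses can be stated in a few lines: cyclic learning presents the samples in the same fixed order every epoch, while almost-cyclic learning reshuffles them each epoch but still visits every sample exactly once per cycle. A sketch of the two index streams (illustrative):

```python
import random

def cyclic_order(n_samples, n_epochs):
    """Cyclic: the identical fixed order in every training cycle."""
    order = list(range(n_samples))
    for _ in range(n_epochs):
        yield from order

def almost_cyclic_order(n_samples, n_epochs, seed=0):
    """Almost-cyclic: a fresh permutation each cycle, but every sample
    still appears exactly once per cycle."""
    rng = random.Random(seed)
    order = list(range(n_samples))
    for _ in range(n_epochs):
        rng.shuffle(order)
        yield from order

cyc = list(cyclic_order(4, 2))   # [0, 1, 2, 3, 0, 1, 2, 3]
alm = list(almost_cyclic_order(4, 2))
```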