Dynamic learning rate optimization of the backpropagation algorithm

XH Yu, GA Chen, SX Cheng - IEEE Transactions on Neural Networks, 1995 - ieeexplore.ieee.org
It has been observed by many authors that backpropagation (BP) error surfaces usually consist of a large number of flat regions as well as extremely steep regions. As such, the BP algorithm with a fixed learning rate will have low efficiency. This paper considers dynamic learning rate optimization of the BP algorithm using derivative information. An efficient method of deriving the first and second derivatives of the objective function with respect to the learning rate is explored, which does not involve explicit calculation of second-order derivatives in weight space, but rather uses the information gathered from the forward and backward propagation. Several learning rate optimization approaches are subsequently established, based on the linear expansion of the actual outputs and on line searches with acceptable descent value and Newton-like methods, respectively. Simultaneous determination of the optimal learning rate and momentum is also introduced by showing the equivalence between the momentum version of BP and the conjugate gradient method. Since these approaches are constructed by simple manipulations of the obtained derivatives, the computational and storage burden scales with the network size exactly like the standard BP algorithm, and the convergence of the BP algorithm is accelerated, with a remarkable reduction (typically by a factor of 10 to 50, depending upon network architectures and applications) in the running time for the overall learning process. Numerous computer simulation results are provided to support the present approaches.
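
As a rough illustration of the kind of Newton-like learning rate update the abstract alludes to, the sketch below restricts the loss to the current descent direction, phi(eta) = E(w - eta*g), where g is the gradient. At eta = 0 one has phi'(0) = -||g||^2 and phi''(0) = g^T H g, and the Hessian-vector product H g is approximated by a finite difference of two gradient evaluations, so no second-order derivatives in weight space are formed explicitly. This is a minimal sketch under stated assumptions, not the authors' algorithm: the function names, the finite-difference approximation, and the fallback learning rate are illustrative choices.

```python
import numpy as np

def newton_learning_rate(loss_grad, w, eps=1e-6):
    """Newton-like optimal learning rate for one steepest-descent step.

    loss_grad(w) must return the gradient of the loss at w.
    """
    g = loss_grad(w)
    gTg = g @ g                                    # -phi'(0) = ||g||^2
    # Approximate H g by a finite difference of gradients, avoiding any
    # explicit second-order computation in weight space.
    Hg = (loss_grad(w + eps * g) - loss_grad(w)) / eps
    curvature = g @ Hg                             # phi''(0) = g^T H g
    if curvature <= 0:
        return 1e-2                                # fallback on non-convex slices
    return gTg / curvature                         # eta* = ||g||^2 / (g^T H g)

# Example on a quadratic loss E(w) = 0.5 w^T A w - b^T w, whose gradient
# is A w - b and whose minimizer is A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda w: A @ w - b

w = np.zeros(2)
for _ in range(20):
    eta = newton_learning_rate(grad, w)
    w = w - eta * grad(w)
print(w, np.linalg.solve(A, b))  # iterates approach the true minimizer
```

On a quadratic loss this update coincides with the exact steepest-descent line search eta* = ||g||^2 / (g^T A g); for a general network loss, the two extra gradient evaluations per step roughly play the role of the derivative information the paper extracts from the forward and backward propagation.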