Barzilai-Borwein Step Size for Stochastic Gradient Descent

C. Tan, S. Ma, Y.-H. Dai, Y. Qian — Advances in Neural Information Processing Systems, 2016 — proceedings.neurips.cc
Abstract
One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization methods, the common practice in SGD is either to use a diminishing step size or to tune a step size by hand, which can be time-consuming in practice. In this paper, we propose to use the Barzilai-Borwein (BB) method to automatically compute step sizes for SGD and its variant, the stochastic variance reduced gradient (SVRG) method, which leads to two algorithms: SGD-BB and SVRG-BB. We prove that SVRG-BB converges linearly for strongly convex objective functions. As a by-product, we prove the linear convergence result of SVRG with Option I proposed in [10], whose convergence result has been missing in the literature. Numerical experiments on standard data sets show that the performance of SGD-BB and SVRG-BB is comparable to, and sometimes even better than, SGD and SVRG with best-tuned step sizes, and is superior to some advanced SGD variants.
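To make the idea concrete, below is a minimal sketch (not the authors' reference implementation) of how a BB step size can be computed from consecutive SVRG snapshots, as the abstract describes: each epoch's step size is derived from the change in the snapshot point and the change in the full gradient, rather than being hand-tuned. The function name svrg_bb, the grad(x, i) interface, and the denominator safeguard are assumptions made for illustration.

```python
import numpy as np

def svrg_bb(grad, x0, n, m, eta0, num_epochs, rng=None):
    """Minimize (1/n) * sum_i f_i(x) with an SVRG-style loop whose per-epoch
    step size is chosen by the Barzilai-Borwein rule instead of a fixed constant.

    grad(x, i) -- stochastic gradient of component i at x; i=None returns the
                  full gradient (an assumed interface for this sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    x_tilde = x0.copy()
    x_tilde_prev, g_tilde_prev = None, None
    eta = eta0  # step size for the first epoch, before BB information exists

    for _ in range(num_epochs):
        g_tilde = grad(x_tilde, None)  # full gradient at the current snapshot

        # BB step size for this epoch:
        #   eta_k = (1/m) * ||s||^2 / |s^T y|,
        # with s = x_tilde_k - x_tilde_{k-1}, y = g_tilde_k - g_tilde_{k-1}.
        if x_tilde_prev is not None:
            s = x_tilde - x_tilde_prev
            y = g_tilde - g_tilde_prev
            denom = abs(s @ y)
            if denom > 1e-12:  # safeguard against a tiny denominator (assumption)
                eta = (s @ s) / (m * denom)

        x_tilde_prev, g_tilde_prev = x_tilde.copy(), g_tilde.copy()

        # Standard SVRG inner loop with variance-reduced stochastic gradients.
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            v = grad(x, i) - grad(x_tilde, i) + g_tilde
            x -= eta * v
        x_tilde = x  # take the last inner iterate as the next snapshot

    return x_tilde
```

The key point illustrated here is that the step size adapts automatically from epoch to epoch using only quantities SVRG already computes (snapshots and full gradients), which is what removes the need for hand-tuning.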