Stochastic training is not necessary for generalization

J Geiping, M Goldblum, PE Pope, M Moeller… - arXiv preprint arXiv:2109.14119, 2021 - arxiv.org
It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve comparably strong performance to SGD on CIFAR-10 using modern architectures. To this end, we show that the implicit regularization of SGD can be completely replaced with explicit regularization even when comparing against a strong and well-researched baseline. Our observations indicate that the perceived difficulty of full-batch training may be the result of its optimization properties and the disproportionate time and effort spent by the ML community tuning optimizers and hyperparameters for small-batch training.
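A minimal sketch, assuming PyTorch, of the idea the abstract describes: full-batch gradient descent in which an explicit regularizer (here, a penalty on the squared norm of the full-batch loss gradient) stands in for the implicit regularization usually attributed to SGD noise. The model, toy data, learning rate, and coefficient `alpha` are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data standing in for a full training set loaded into memory at once
# (no mini-batching, so every update uses the exact full-batch gradient).
X = torch.randn(512, 32)
y = torch.randint(0, 10, (512,))

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

alpha = 0.01  # strength of the explicit regularizer (hypothetical value)

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(X), y)

    # Explicit regularizer: penalize the squared norm of the full-batch
    # gradient; create_graph=True keeps the penalty differentiable so it
    # contributes to the parameter update.
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_norm_sq = sum(g.pow(2).sum() for g in grads)

    total = loss + alpha * grad_norm_sq
    total.backward()
    optimizer.step()
```

The design point the sketch illustrates is that the regularization is stated explicitly in the objective rather than arising from stochastic mini-batch sampling; the paper's actual training recipe (architectures, schedules, and regularizer details on CIFAR-10) is described in the full text.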