Benchopt: Reproducible, efficient and collaborative optimization benchmarks

T Moreau, M Massias, A Gramfort… - Advances in …, 2022 - proceedings.neurips.cc
Numerical validation is at the core of machine learning research as it allows us to assess the
actual impact of new methods, and to confirm the agreement between theory and practice …

DeepOBS: A deep learning optimizer benchmark suite

F Schneider, L Balles, P Hennig - arXiv preprint arXiv:1903.05499, 2019 - arxiv.org
Because the choice and tuning of the optimizer affects the speed, and ultimately the
performance of deep learning, there is significant past and recent research in this area. Yet …

Descending through a crowded valley: Benchmarking deep learning optimizers

RM Schmidt, F Schneider… - … Conference on Machine …, 2021 - proceedings.mlr.press
Choosing the optimizer is considered to be among the most crucial design decisions in deep
learning, and it is not an easy one. The growing literature now lists hundreds of optimization …

AGD: an auto-switchable optimizer using stepwise gradient difference for preconditioning matrix

Y Yue, Z Ye, J Jiang, Y Liu… - Advances in Neural …, 2023 - proceedings.neurips.cc
Adaptive optimizers, such as Adam, have achieved remarkable success in deep learning. A
key component of these optimizers is the so-called preconditioning matrix, providing …

Efficient non-parametric optimizer search for diverse tasks

R Wang, Y Xiong, M Cheng… - Advances in Neural …, 2022 - proceedings.neurips.cc
Efficient and automated design of optimizers plays a crucial role in full-stack AutoML
systems. However, prior methods in optimizer search are often limited by their scalability …

NEORL: NeuroEvolution optimization with reinforcement learning

MI Radaideh, K Du, P Seurin, D Seyler, X Gu… - arXiv preprint arXiv …, 2021 - arxiv.org
We present an open-source Python framework for NeuroEvolution Optimization with
Reinforcement Learning (NEORL) developed at the Massachusetts Institute of Technology …

Optimizer's Information Criterion: Dissecting and Correcting Bias in Data-Driven Optimization

G Iyengar, H Lam, T Wang - arXiv preprint arXiv:2306.10081, 2023 - arxiv.org
In data-driven optimization, the sample performance of the obtained decision typically incurs
an optimistic bias against the true performance, a phenomenon commonly known as the …

When do flat-minima optimizers work?

J Kaddour, L Liu, R Silva… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recently, flat-minima optimizers, which seek to find parameters in low-loss neighborhoods,
have been shown to improve a neural network's generalization performance over stochastic …

NeuroEvoBench: Benchmarking evolutionary optimizers for deep learning applications

R Lange, Y Tang, Y Tian - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Recently, the Deep Learning community has become interested in evolutionary
optimization (EO) as a means to address hard optimization problems, e.g., meta-learning …

Do optimization methods in deep learning applications matter?

BM Ozyildirim, M Kiran - arXiv preprint arXiv:2002.12642, 2020 - arxiv.org
With advances in deep learning, exponential data growth and increasing model complexity,
developing efficient optimization methods is attracting much research attention. Several …