Universal online learning with gradient variations: A multi-layer online ensemble approach

YH Yan, P Zhao, ZH Zhou - Advances in Neural Information …, 2024 - proceedings.neurips.cc
In this paper, we propose an online convex optimization approach with two different levels of
adaptivity. On a higher level, our approach is agnostic to the unknown types and curvatures …
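Since the abstract is cut off, the sketch below only illustrates the generic two-layer online ensemble structure that this line of work builds on: a Hedge-style meta-learner aggregating online-gradient-descent base learners run with different step sizes. The class names, step-size grid, and toy loss are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto an L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

class OGDBase:
    """Base learner: projected online gradient descent with a fixed step size."""
    def __init__(self, dim, step_size):
        self.x = np.zeros(dim)
        self.eta = step_size

    def predict(self):
        return self.x

    def update(self, grad):
        self.x = project_l2_ball(self.x - self.eta * grad)

class HedgeEnsemble:
    """Meta-learner: exponentially weighted combination of base predictions."""
    def __init__(self, bases, meta_lr=0.1):
        self.bases = bases
        self.meta_lr = meta_lr
        self.log_w = np.zeros(len(bases))

    def predict(self):
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        preds = np.stack([b.predict() for b in self.bases])
        return w @ preds

    def update(self, grad):
        # Linearized surrogate loss <grad, x_base> drives the meta-weights.
        preds = np.stack([b.predict() for b in self.bases])
        self.log_w -= self.meta_lr * (preds @ grad)
        for b in self.bases:
            b.update(grad)

# Toy run: base learners with geometrically spaced step sizes, quadratic loss.
dim = 5
ensemble = HedgeEnsemble([OGDBase(dim, eta) for eta in (0.01, 0.1, 1.0)])
for t in range(100):
    x_t = ensemble.predict()
    grad = 2 * (x_t - np.ones(dim))   # gradient of ||x - 1||^2
    ensemble.update(grad)
```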

Optimistic online mirror descent for bridging stochastic and adversarial online convex optimization

S Chen, YJ Zhang, WW Tu, P Zhao, L Zhang - Journal of Machine Learning …, 2024 - jmlr.org
The stochastically extended adversarial (SEA) model, introduced by Sachs et al. (2022),
serves as an interpolation between stochastic and adversarial online convex optimization …
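For background, optimistic online mirror descent maintains an auxiliary iterate and folds in a hint M_t (an optimistic guess of the upcoming gradient) before the loss is revealed. A standard form of the update, stated here for context rather than quoted from the truncated abstract, is:

```latex
% Optimistic OMD with regularizer \psi, Bregman divergence D_\psi, step size \eta,
% hint M_t, and observed gradient g_t = \nabla f_t(x_t); a common hint is M_t = g_{t-1}.
\begin{align*}
  x_t           &= \operatorname*{arg\,min}_{x \in \mathcal{X}} \;
                   \eta \langle M_t, x \rangle + D_\psi(x, \hat{x}_t), \\
  \hat{x}_{t+1} &= \operatorname*{arg\,min}_{x \in \mathcal{X}} \;
                   \eta \langle g_t, x \rangle + D_\psi(x, \hat{x}_t).
\end{align*}
```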

Fast rates in time-varying strongly monotone games

YH Yan, P Zhao, ZH Zhou - International Conference on …, 2023 - proceedings.mlr.press
Multi-player online games depict the interaction of multiple players with each other over
time. Strongly monotone games are of particular interest since they have benign properties …
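For context, a game is strongly monotone when the operator that stacks each player's gradient is strongly monotone; the standard definition (general background, not quoted from the paper) is:

```latex
% With F(x) = (\nabla_{x_1} \ell_1(x), \dots, \nabla_{x_N} \ell_N(x)) the stacked gradient operator,
% \lambda-strong monotonicity requires, for all joint action profiles x, y \in \mathcal{X}:
\langle F(x) - F(y),\, x - y \rangle \;\ge\; \lambda\, \lVert x - y \rVert^{2}.
```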

Dual adaptivity: A universal algorithm for minimizing the adaptive regret of convex functions

L Zhang, G Wang, WW Tu, W Jiang… - Advances in Neural …, 2021 - proceedings.neurips.cc
To deal with changing environments, a new performance measure, adaptive regret, defined
as the maximum static regret over any interval, was proposed in online learning. Under the …
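The definition described in words above, the maximum static regret over any contiguous interval, can be written as follows (standard formulation, not copied from the paper):

```latex
% Adaptive regret: worst-case static regret over any interval [r, s] \subseteq [T].
\mathrm{A\text{-}Regret}(T) \;=\; \max_{1 \le r \le s \le T}
  \left( \sum_{t=r}^{s} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=r}^{s} f_t(x) \right).
```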

Universal Gradient Methods for Stochastic Convex Optimization

A Rodomanov, A Kavis, Y Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
We develop universal gradient methods for Stochastic Convex Optimization (SCO). Our
algorithms automatically adapt not only to the oracle's noise but also to the Hölder …
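Adapting to the Hölder class usually means handling gradients that satisfy the standard Hölder continuity condition below (given as background; the exact assumption in the paper may differ):

```latex
% Hölder-continuous gradients with exponent \nu and constant L_\nu:
\lVert \nabla f(x) - \nabla f(y) \rVert \;\le\; L_{\nu}\, \lVert x - y \rVert^{\nu},
\qquad \forall\, x, y, \quad \nu \in [0, 1].
```

Here ν = 1 recovers Lipschitz-smooth gradients and ν = 0 covers bounded (nonsmooth) subgradients.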

Nonstationary online convex optimization with multiple predictions

Q Meng, J Liu - Information Sciences, 2024 - Elsevier
This work focuses on dynamic regret for non-stationary online convex optimization with full
information. State-of-the-art analysis shows that Implicit Online Mirror Descent (IOMD) …
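Dynamic regret compares the learner against a time-varying comparator sequence rather than a single fixed point; the standard definition and the path-length quantity that usually appears in its bounds are (background, not quoted from the paper):

```latex
% Dynamic regret against comparators u_1, \dots, u_T, and the path length P_T.
\mathrm{D\text{-}Regret}(u_1, \dots, u_T) \;=\;
  \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert .
```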

Universal Online Convex Optimization with Projection per Round

W Yang, Y Wang, P Zhao, L Zhang - arXiv preprint arXiv:2405.19705, 2024 - arxiv.org
To address the uncertainty in function types, recent progress in online convex optimization
(OCO) has spurred the development of universal algorithms that simultaneously attain …
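The projection being counted here is the per-round projection onto the feasible set, which is the main computational bottleneck when that set is complex. The minimal projected online gradient descent loop below is a generic illustration of where that cost sits, not the paper's algorithm; the L2-ball projection is a stand-in for what may be an expensive optimization for general sets.

```python
import numpy as np

def project(x, radius=1.0):
    """Euclidean projection onto an L2 ball; for general convex sets this step
    can itself require solving an optimization problem."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_ogd(grad_fn, dim, T, eta=0.1):
    """Projected OGD: one gradient step and one projection per round."""
    x = np.zeros(dim)
    iterates = []
    for t in range(T):
        iterates.append(x)
        g = grad_fn(x, t)
        x = project(x - eta * g)   # exactly one projection per round
    return iterates

# Toy usage with a fixed quadratic loss.
_ = projected_ogd(lambda x, t: 2 * (x - 0.5), dim=3, T=50)
```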

Nearly optimal algorithms with sublinear computational complexity for online kernel regression

J Li, S Liao - International Conference on Machine Learning, 2023 - proceedings.mlr.press
The trade-off between regret and computational cost is a fundamental problem for online
kernel regression, and previous algorithms addressing this trade-off cannot keep optimal …
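The trade-off arises because exact kernelized online learning stores every past example, so per-round cost grows with t. The sketch below uses a simple budgeted variant with a drop-the-oldest rule; it is an illustrative heuristic showing where the computational saving comes from, not the algorithm proposed in this paper.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class BudgetedOnlineKernelRegression:
    """Kernelized online gradient descent on squared loss with a fixed budget:
    once the support set exceeds the budget, the oldest point is dropped,
    keeping per-round cost O(budget) instead of O(t)."""
    def __init__(self, eta=0.1, budget=50, gamma=1.0):
        self.eta, self.budget, self.gamma = eta, budget, gamma
        self.support, self.alpha = [], []

    def predict(self, x):
        return sum(a * rbf(s, x, self.gamma)
                   for s, a in zip(self.support, self.alpha))

    def update(self, x, y):
        residual = self.predict(x) - y        # gradient of 0.5*(f(x)-y)^2 w.r.t. f(x)
        self.support.append(x)
        self.alpha.append(-self.eta * residual)
        if len(self.support) > self.budget:   # enforce the budget
            self.support.pop(0)
            self.alpha.pop(0)

# Toy data stream.
model = BudgetedOnlineKernelRegression()
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=2)
    model.update(x, np.sin(x[0]))
```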

Contaminated Online Convex Optimization

T Kamijima, S Ito - arXiv preprint arXiv:2404.18093, 2024 - arxiv.org
In the field of online convex optimization, some efficient algorithms have been designed for
each of the individual classes of objective functions, e.g., convex, strongly convex, and exp-concave …
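The function classes mentioned are the standard ones; for reference, their definitions (general background, not taken from the paper) are:

```latex
% \mu-strong convexity:
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{\mu}{2}\, \lVert y - x \rVert^{2}
\qquad \forall\, x, y \in \mathcal{X};
% \alpha-exp-concavity: the map x \mapsto \exp(-\alpha f(x)) is concave on \mathcal{X}.
```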

Fast Rates in Online Convex Optimization by Exploiting the Curvature of Feasible Sets

T Tsuchiya, S Ito - arXiv preprint arXiv:2402.12868, 2024 - arxiv.org
In this paper, we explore online convex optimization (OCO) and introduce a new analysis
that provides fast rates by exploiting the curvature of feasible sets. In online linear …
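The notion of curvature of a feasible set usually referenced in this literature is strong convexity of the set; the standard definition is given below as background, and the paper's actual condition may be weaker or more general.

```latex
% A convex set \mathcal{K} is \alpha-strongly convex w.r.t. \lVert \cdot \rVert if, for all
% x, y \in \mathcal{K} and \theta \in [0, 1], the following ball is contained in \mathcal{K}:
B\!\left( \theta x + (1 - \theta) y,\; \tfrac{\alpha}{2}\, \theta (1 - \theta)\, \lVert x - y \rVert^{2} \right)
\;\subseteq\; \mathcal{K},
```

where B(c, r) denotes the norm ball of radius r centered at c.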