Optimal dynamic regret in exp-concave online learning

D Baby, YX Wang - Conference on Learning Theory, 2021 - proceedings.mlr.press
Abstract
We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses. We show that whenever improper learning is allowed, a Strongly Adaptive online learner achieves a dynamic regret of $\tilde O^*(n^{1/3} C_n^{2/3} \vee 1)$, where $C_n$ is the total variation (a.k.a. path length) of an arbitrary sequence of comparators that may not be known to the learner ahead of time. Achieving this rate was highly nontrivial even for square losses in 1D, where the best known upper bound was that of Yuan and Lamperski (2019). Our new proof techniques make elegant use of the intricate structures that the KKT conditions impose on the primal and dual variables, and may be of independent interest. Finally, we apply our results to the classical statistical problem of locally adaptive non-parametric regression (Mammen, 1991; Donoho and Johnstone, 1998) and obtain a stronger and more flexible algorithm that requires neither statistical assumptions nor hyperparameter tuning.
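To make the two central quantities of the abstract concrete, here is a minimal illustrative sketch (not the paper's algorithm) of dynamic regret against a time-varying comparator sequence $u_1,\dots,u_n$ under 1D square loss, together with the path length $C_n = \sum_{t\ge 2} |u_t - u_{t-1}|$; all function names and the toy data are invented for illustration:

```python
import numpy as np

def square_loss(pred, y):
    # 1D square loss, the special case the abstract highlights.
    return (pred - y) ** 2

def dynamic_regret(preds, comparators, ys):
    """sum_t loss(pred_t, y_t) - sum_t loss(u_t, y_t): regret against
    a sequence of comparators u_t rather than a single fixed point."""
    learner = sum(square_loss(p, y) for p, y in zip(preds, ys))
    comparator = sum(square_loss(u, y) for u, y in zip(comparators, ys))
    return learner - comparator

def path_length(comparators):
    """C_n = sum_{t>=2} |u_t - u_{t-1}| (total variation of the comparators)."""
    u = np.asarray(comparators, dtype=float)
    return float(np.abs(np.diff(u)).sum())

# Toy example: a piecewise-constant comparator tracks a shifting signal,
# while a static learner (predicting 0 forever) incurs large regret.
ys = [0.0, 0.1, -0.1, 1.0, 1.1, 0.9]
comparators = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # one jump: C_n = 1.0
preds = [0.0] * 6

print(path_length(comparators))                 # 1.0
print(dynamic_regret(preds, comparators, ys))   # 3.0
```

A dynamic-regret bound of the form $\tilde O^*(n^{1/3} C_n^{2/3} \vee 1)$ thus degrades gracefully with how much the comparator sequence moves: a static comparator ($C_n = 0$) recovers near-constant regret, while more movement is charged only through $C_n^{2/3}$.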