B Li, G Giannakis - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Sharpness-aware minimization (SAM) has well-documented merits in enhancing the generalization of deep neural networks, even without sizable data augmentation. Embracing …
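As a reference point for the snippet above, the core SAM update can be sketched in a few lines: ascend within a small $\ell_2$-ball along the normalized gradient direction, then descend using the gradient taken at the perturbed point. This is an illustrative sketch on a toy quadratic, not the cited paper's method; the problem data, step size `lr`, and radius `rho` are all assumptions.

```python
import numpy as np

# Toy quadratic loss: f(w) = 0.5 * ||A w - b||^2, with gradient A^T (A w - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def loss(w):
    return 0.5 * np.sum((A @ w - b) ** 2)

def grad(w):
    return A.T @ (A @ w - b)

def sam_step(w, lr=0.01, rho=0.05):
    """One SAM update: perturb the weights toward the (approximate)
    worst case within an l2-ball of radius rho, then descend using
    the gradient evaluated at the perturbed weights."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_adv = grad(w + eps)                        # gradient at perturbed point
    return w - lr * g_adv

w = np.zeros(5)
for _ in range(500):
    w = sam_step(w)
```

The two gradient evaluations per step are the characteristic cost of SAM relative to plain gradient descent.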
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes under the popular $\delta$-similarity and $\mu$-strong convexity conditions …
Y Han, G Xie, Z Zhang - Journal of Machine Learning Research, 2024 - jmlr.org
In this paper we study the lower complexity bounds for finite-sum optimization problems, where the objective is the average of $n$ individual component functions. We consider a …
In this work, we introduce a novel stochastic proximal alternating linearized minimization algorithm [J. Bolte, S. Sabach, and M. Teboulle, Math. Program., 146 (2014), pp. 459--494] …
The present contribution deals with decentralized policy evaluation in multi-agent Markov decision processes using temporal-difference (TD) methods with linear function …
A Khaled, C Jin - arXiv preprint arXiv:2209.02257, 2022 - arxiv.org
Federated learning (FL) is a subfield of machine learning where multiple clients try to collaboratively learn a model over a network under communication constraints. We consider …
B Li, L Wang, GB Giannakis - International conference on …, 2020 - proceedings.mlr.press
The class of variance reduction algorithms, including the representative SVRG and SARAH, has well-documented merits for empirical risk minimization problems. However …
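For context on the variance-reduction family named above, a minimal SVRG loop can be sketched on a toy finite-sum least-squares problem: periodically snapshot the iterate and its full gradient, then take inner stochastic steps whose gradient estimate is corrected by the snapshot. The data, epoch count, and step size below are assumptions for illustration, not parameters from the cited work.

```python
import numpy as np

# Finite-sum objective: f(w) = (1/n) * sum_i 0.5 * (a_i @ w - b_i)^2
rng = np.random.default_rng(2)
n, d = 100, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def full_grad(w):
    return A.T @ (A @ w - b) / n

def comp_grad(i, w):
    """Gradient of the i-th component function."""
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
lr = 0.01
for epoch in range(30):
    w_snap = w.copy()
    mu = full_grad(w_snap)          # full gradient at the snapshot
    for _ in range(n):              # inner loop of cheap stochastic steps
        i = rng.integers(n)
        # variance-reduced estimate: unbiased, and its variance
        # vanishes as both w and w_snap approach the minimizer
        g = comp_grad(i, w) - comp_grad(i, w_snap) + mu
        w -= lr * g

w_star = np.linalg.lstsq(A, b, rcond=None)[0]  # reference solution
```

The snapshot correction is what distinguishes SVRG from plain SGD: each inner step still touches only one component, yet the estimator's variance decays to zero near the optimum, enabling a constant step size.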
In this paper, for solving a broad class of large-scale nonconvex and nonsmooth optimization problems, we propose a stochastic two-step inertial Bregman proximal …
K Zhou, L Tian, AMC So… - … Conference on Artificial …, 2022 - proceedings.mlr.press
In convex optimization, the problem of finding near-stationary points has not yet been adequately studied, unlike other optimality measures such as the function value. Even in …