Win: Weight-Decay-Integrated Nesterov Acceleration for Adaptive Gradient Algorithms

P Zhou, X Xie, S Yan - 2023 - ink.library.smu.edu.sg
Training deep networks on large-scale datasets is computationally challenging. In this work,
we explore the problem of “how to accelerate adaptive gradient algorithms in a general …

Adaptive federated learning with auto-tuned clients

JL Kim, MT Toghani, CA Uribe, A Kyrillidis - arXiv preprint arXiv …, 2023 - arxiv.org
Federated learning (FL) is a distributed machine learning framework where the global model
of a central server is trained via multiple collaborative steps by participating clients without …

Faster federated optimization under second-order similarity

A Khaled, C Jin - arXiv preprint arXiv:2209.02257, 2022 - arxiv.org
Federated learning (FL) is a subfield of machine learning where multiple clients try to
collaboratively learn a model over a network under communication constraints. We consider …

Win: Weight-Decay-Integrated Nesterov Acceleration for Faster Network Training

P Zhou, X Xie, Z Lin, KC Toh, S Yan - Journal of Machine Learning …, 2024 - jmlr.org
Training deep networks on large-scale datasets is computationally challenging. This work
explores the problem of “how to accelerate adaptive gradient algorithms in a general …

Adaptive proximal SGD based on new estimating sequences for sparser ERM

Z Zhang, S Zhou - Information Sciences, 2023 - Elsevier
Estimating sequences, introduced by Nesterov, are an efficient trick to accelerate gradient
descent (GD). The stochastic version of estimating sequences is also successfully used to …

Variance reduction techniques for stochastic proximal point algorithms

C Traoré, V Apidopoulos, S Salzo, S Villa - Journal of Optimization Theory …, 2024 - Springer
In the context of finite sums minimization, variance reduction techniques are widely used to
improve the performance of state-of-the-art stochastic gradient methods. Their practical …

On first-order meta-reinforcement learning with Moreau envelopes

MT Toghani, S Perez-Salazar… - 2023 62nd IEEE …, 2023 - ieeexplore.ieee.org
Meta-Reinforcement Learning (MRL) is a promising framework for training agents that can
quickly adapt to new environments and tasks. In this work, we study the MRL problem under …

A Catalyst Framework for the Quantum Linear System Problem via the Proximal Point Algorithm

JL Kim, NH Chia, A Kyrillidis - arXiv preprint arXiv:2406.13879, 2024 - arxiv.org
Solving systems of linear equations is a fundamental problem, but it can be computationally
intensive for classical algorithms in high dimensions. Existing quantum algorithms can …

Genuense Athenaeum

C Traoré - 2024 - cheiktraore.com
The impact that data science has on everyday life is immeasurable and continuously
expanding. Imaging, language translation, self-driving cars, ChatGPT are some examples …

Large-scale convex optimization: parallelization and variance reduction

MCI Traore - 2024 - tesidottorato.depositolegale.it
In this work, we investigate two aspects of large-scale optimization for convex functions
defined on an infinite-dimensional separable Hilbert space: parallelized methods and …