Federated learning (FL) is a distributed machine learning framework in which a central server's global model is trained through multiple collaborative rounds by participating clients without …
A Khaled, C Jin - arXiv preprint arXiv:2209.02257, 2022 - arxiv.org
Federated learning (FL) is a subfield of machine learning where multiple clients try to collaboratively learn a model over a network under communication constraints. We consider …
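The two federated learning snippets above describe the same client-server training pattern. Below is a minimal FedAvg-style sketch of that pattern (illustrative only, not the specific method of either paper; the toy linear model, synthetic client datasets, learning rate, and round counts are all assumptions):

```python
# Minimal FedAvg-style sketch (illustrative, not the method of the papers above).
# A central server holds a global model; each round, clients train locally on
# their own data and the server averages the resulting parameters.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 4
w_true = rng.normal(size=dim)

# Toy local datasets: each client keeps its own (X, y) and never shares raw data.
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, dim))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_gd(w, X, y, lr=0.1, steps=20):
    """A few steps of local (full-batch) gradient descent on one client's least-squares loss."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(dim)
for _ in range(30):
    # Each client starts from the current global model and trains locally.
    local_models = [local_gd(w_global, X, y) for X, y in clients]
    # The server aggregates by simple (unweighted) parameter averaging.
    w_global = np.mean(local_models, axis=0)

print("parameter error:", np.linalg.norm(w_global - w_true))
```

Only model parameters cross the network in this sketch; the raw client data stays local, which is the communication pattern both snippets refer to.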
Training deep networks on large-scale datasets is computationally challenging. This work explores the problem of “how to accelerate adaptive gradient algorithms in a general …
Z Zhang, S Zhou - Information Sciences, 2023 - Elsevier
Estimating sequences, introduced by Nesterov, are an efficient tool for accelerating gradient descent (GD). The stochastic version of estimating sequences has also been used successfully to …
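The estimating-sequence construction referred to here is what yields Nesterov's accelerated gradient method. A minimal sketch of the resulting update in its common momentum form follows (the toy least-squares objective, the 1/L step size, and the standard t_k momentum schedule are assumptions for illustration, not details from the paper):

```python
# Sketch of Nesterov's accelerated gradient descent, the method that the
# estimating-sequence analysis produces (momentum form, toy quadratic objective).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad of 0.5*||Ax - b||^2

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(10)
y = x.copy()                           # extrapolated (momentum) point
t = 1.0
for _ in range(200):
    x_new = y - grad(y) / L            # gradient step taken from the extrapolated point
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov extrapolation
    x, t = x_new, t_new

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```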
In the context of finite-sum minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods. Their practical …
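Variance reduction for finite sums is commonly illustrated with SVRG-style control-variate updates; a minimal sketch under that assumption is below (toy least-squares finite sum, arbitrarily chosen hyperparameters, not the specific technique of the paper above):

```python
# Minimal SVRG-style variance-reduction sketch for a finite-sum objective
# f(w) = (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2   (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n, dim = 200, 10
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.05 * rng.normal(size=n)

def full_grad(w):
    return X.T @ (X @ w - y) / n

def grad_i(w, i):
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(dim)
lr = 0.01
for epoch in range(20):
    w_snap = w.copy()              # snapshot point for this epoch
    mu = full_grad(w_snap)         # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Control-variate update: stochastic gradient corrected by the snapshot gradient,
        # which keeps the update unbiased while shrinking its variance near the optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g

print("parameter error:", np.linalg.norm(w - w_true))
```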
Meta-Reinforcement Learning (MRL) is a promising framework for training agents that can quickly adapt to new environments and tasks. In this work, we study the MRL problem under …
Solving systems of linear equations is a fundamental problem, but it can be computationally intensive for classical algorithms in high dimensions. Existing quantum algorithms can …
The impact that data science has on everyday life is immense and continuously expanding. Imaging, language translation, self-driving cars, and ChatGPT are some examples …
In this work, we investigate two aspects of large-scale optimization for convex functions defined on an infinite-dimensional separable Hilbert space: parallelized methods and …