We propose an adaptive variance-reduction method, called AdaSpider, for minimization of $L$-smooth, non-convex functions with a finite-sum structure. In essence, AdaSpider …
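The snippet only names AdaSpider; as background, here is a minimal sketch of the (non-adaptive) SPIDER-style recursive variance-reduced estimator such methods build on, run on an assumed toy least-squares finite sum. The step size and refresh period are illustrative; AdaSpider's adaptive step-size rule is not reproduced.

```python
import numpy as np

# Toy finite sum f(x) = (1/n) sum_i (a_i^T x - b_i)^2 (assumed for illustration).
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    # gradient of the i-th component (a_i^T x - b_i)^2
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return (2.0 / n) * A.T @ (A @ x - b)

x = np.zeros(d)
v = full_grad(x)
eta, refresh = 0.01, n  # recompute the exact gradient every n steps
for t in range(1, 501):
    x_new = x - eta * v
    if t % refresh == 0:
        v = full_grad(x_new)                        # periodic exact refresh
    else:
        i = rng.integers(n)
        v = grad_i(x_new, i) - grad_i(x, i) + v     # recursive SPIDER update
    x = x_new
print(float(np.linalg.norm(full_grad(x))))
```

The recursive update keeps `v` tracking the full gradient at per-iteration cost of two component gradients, at the price of a periodic full-gradient refresh.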
The present contribution deals with decentralized policy evaluation in multi-agent Markov decision processes using temporal-difference (TD) methods with linear function …
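For context, a sketch of the local TD(0) update with linear value approximation $V(s) \approx \phi(s)^\top w$ that decentralized schemes like this one run per agent, on an assumed toy 3-state Markov reward process; the multi-agent consensus step is omitted.

```python
import numpy as np

# Illustrative 3-state Markov reward process (transition matrix, rewards,
# and step size are assumptions, not the paper's setting).
rng = np.random.default_rng(1)
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # transition matrix
r = np.array([1.0, 0.0, -1.0])   # per-state rewards
gamma = 0.9
Phi = np.eye(3)                  # tabular features for simplicity

w = np.zeros(3)
alpha, s = 0.02, 0
for _ in range(30000):
    s_next = rng.choice(3, p=P[s])
    # TD(0): move w along the feature of s, scaled by the temporal-difference error
    td_error = r[s] + gamma * Phi[s_next] @ w - Phi[s] @ w
    w += alpha * td_error * Phi[s]
    s = s_next

# exact values V = (I - gamma P)^{-1} r for comparison
V_true = np.linalg.solve(np.eye(3) - gamma * P, r)
print(np.round(w, 2), np.round(V_true, 2))
```

With tabular features the TD fixed point coincides with the exact value function, so `w` should settle near `V_true` up to step-size noise.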
With the well-documented popularity of Frank-Wolfe (FW) algorithms in machine learning tasks, the present paper establishes links between FW subproblems and the notion of …
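As a reference point for the FW subproblems mentioned above, a sketch of the vanilla Frank-Wolfe iteration over an $\ell_1$-ball constraint; the objective, radius, and iteration count are assumptions for illustration, and the paper's specific link to momentum is not reproduced.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle over the l1 ball: argmin_{||s||_1 <= r} <g, s>.

    The minimizer is a signed vertex along the coordinate of largest |g|.
    """
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -radius * np.sign(g[i])
    return s

# toy objective f(x) = 0.5 ||x - c||^2 with a feasible minimizer c (assumed)
c = np.array([0.3, -0.2, 0.1])
x = np.zeros(3)
for k in range(500):
    grad = x - c
    s = lmo_l1(grad)             # FW subproblem: one LMO call per iteration
    step = 2.0 / (k + 2)         # classic O(1/k) FW step size
    x = (1 - step) * x + step * s
print(np.round(x, 2))
```

Each iteration stays feasible by construction (a convex combination of feasible points), which is why FW avoids projections entirely.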
Variance reduction (VR) methods for finite-sum minimization typically require the knowledge of problem-dependent constants that are often unknown and difficult to estimate. To address …
Z Liu, TD Nguyen, A Ene… - … Conference on Machine …, 2022 - proceedings.mlr.press
In this paper, we study the finite-sum convex optimization problem focusing on the general convex case. Recently, the study of variance reduced (VR) methods and their accelerated …
B Li, M Ma, GB Giannakis - International Conference on …, 2020 - proceedings.mlr.press
The main theme of this work is a unifying algorithm, LoopLess SARAH (L2S), for problems formulated as a summation of $n$ individual loss functions. L2S …
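A sketch of what "loopless" typically means in this family: SARAH's fixed-length inner loop is replaced by a coin flip that triggers a full-gradient restart with small probability. The problem instance and parameters below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Toy least-squares finite sum (assumed for illustration).
rng = np.random.default_rng(3)
n, d = 40, 4
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return (2.0 / n) * A.T @ (A @ x - b)

x = np.zeros(d)
v = full_grad(x)
eta, p = 0.02, 1.0 / n   # restart probability ~ 1/n, typical in loopless schemes
for _ in range(2000):
    x_new = x - eta * v
    if rng.random() < p:
        v = full_grad(x_new)                        # probabilistic restart
    else:
        i = rng.integers(n)
        v = grad_i(x_new, i) - grad_i(x, i) + v     # SARAH recursion
    x = x_new
print(float(np.linalg.norm(full_grad(x))))
```

The expected time between restarts is $1/p = n$ steps, so the loopless variant matches the cost profile of the double-loop algorithm while simplifying the analysis.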
For finite-sum optimization, variance-reduced gradient methods (VR) compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster …
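The snapshot-based flavor of this idea (SVRG-style) can be sketched as follows: each cheap step uses one component gradient, corrected by a full gradient stored at a snapshot point. The quadratic instance is an assumption for illustration.

```python
import numpy as np

# Toy least-squares finite sum (assumed for illustration).
rng = np.random.default_rng(2)
n, d = 40, 4
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return (2.0 / n) * A.T @ (A @ x - b)

x = np.zeros(d)
eta = 0.02
for _ in range(30):                    # outer epochs
    snapshot = x.copy()
    mu = full_grad(snapshot)           # full gradient, computed once per epoch
    for _ in range(n):                 # n cheap inner steps per epoch
        i = rng.integers(n)
        # unbiased, low-variance estimate of the full gradient at x
        v = grad_i(x, i) - grad_i(snapshot, i) + mu
        x = x - eta * v
print(float(np.linalg.norm(full_grad(x))))
```

The correction term vanishes as `x` approaches the snapshot, which is what lets these methods use a constant step size where plain SGD cannot.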
This work investigates fault-resilient federated learning when the data samples are non-uniformly distributed across workers, and the number of faulty workers is unknown to the …
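To illustrate the fault-resilience problem, here is one standard robust-aggregation primitive, the coordinate-wise median: a few workers sending arbitrary updates barely move the median, while they can move the mean arbitrarily far. This is a generic sketch of the mechanism, not the paper's specific scheme.

```python
import numpy as np

# Assumed setup: 8 honest workers report noisy copies of the true gradient,
# 2 faulty workers report arbitrary large values.
rng = np.random.default_rng(4)
d, honest, faulty = 5, 8, 2
true_grad = np.ones(d)
updates = [true_grad + 0.1 * rng.normal(size=d) for _ in range(honest)]
updates += [100.0 * rng.normal(size=d) for _ in range(faulty)]   # corrupted
U = np.stack(updates)

mean_agg = U.mean(axis=0)            # dragged far off by the faulty workers
median_agg = np.median(U, axis=0)    # stays near the honest gradients
print(round(float(np.linalg.norm(mean_agg - true_grad)), 1),
      round(float(np.linalg.norm(median_agg - true_grad)), 1))
```

The median per coordinate is bracketed by honest values as long as fewer than half the workers are faulty, which is the basic breakdown-point argument behind such aggregators.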
Aiming at convex optimization under structural constraints, this work introduces and analyzes a variant of the Frank-Wolfe (FW) algorithm termed ExtraFW. The distinct feature of …