| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally! | K Mishchenko, G Malinovsky, S Stich, P Richtárik | International Conference on Machine Learning, 15750-15769 | 123 | 2022 |
| From local SGD to local fixed-point methods for federated learning | G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik | International Conference on Machine Learning, 6692-6701 | 118 | 2020 |
| Variance reduced ProxSkip: Algorithm, theory and application to federated learning | G Malinovsky, K Yi, P Richtárik | Advances in Neural Information Processing Systems 35, 15176-15189 | 22 | 2022 |
| Distributed proximal splitting algorithms with rates and acceleration | L Condat, G Malinovsky, P Richtárik | Frontiers in Signal Processing 1, 776825 | 22 | 2022 |
| Server-side stepsizes and sampling without replacement provably help in federated optimization | G Malinovsky, K Mishchenko, P Richtárik | Proceedings of the 4th International Workshop on Distributed Machine … | 20 | 2023 |
| Federated optimization algorithms with random reshuffling and gradient compression | A Sadiev, G Malinovsky, E Gorbunov, I Sokolov, A Khaled, K Burlachenko, ... | arXiv preprint arXiv:2206.07021 | 20 | 2022 |
| Can 5th generation local training methods support client sampling? Yes! | M Grudzień, G Malinovsky, P Richtárik | International Conference on Artificial Intelligence and Statistics, 1055-1092 | 15 | 2023 |
| Random reshuffling with variance reduction: New analysis and better rates | G Malinovsky, A Sailanbayev, P Richtárik | Uncertainty in Artificial Intelligence, 1347-1357 | 14 | 2023 |
| A guide through the zoo of biased SGD | Y Demidovich, G Malinovsky, I Sokolov, P Richtárik | Advances in Neural Information Processing Systems 36 | 11 | 2024 |
| TAMUNA: Accelerated federated learning with local training and partial participation | LP Condat, G Malinovsky, P Richtárik | arXiv | 11* | 2023 |
| Federated learning with regularized client participation | G Malinovsky, S Horváth, K Burlachenko, P Richtárik | arXiv preprint arXiv:2302.03662 | 10 | 2023 |
| Improving accelerated federated learning with compression and importance sampling | M Grudzień, G Malinovsky, P Richtárik | arXiv preprint arXiv:2306.03240 | 8 | 2023 |
| Byzantine robustness and partial participation can be achieved simultaneously: Just clip gradient differences | G Malinovsky, P Richtárik, S Horváth, E Gorbunov | arXiv preprint arXiv:2311.14127 | 5 | 2023 |
| Federated random reshuffling with compression and variance reduction | G Malinovsky, P Richtárik | arXiv preprint arXiv:2205.03914 | 5 | 2022 |
| An optimal algorithm for strongly convex min-min optimization | A Gasnikov, D Kovalev, G Malinovsky | arXiv preprint arXiv:2212.14439 | 4 | 2022 |
| Averaged heavy-ball method | MY Danilova, GS Malinovsky | Izhevsk Institute of Computer Science | 4* | 2022 |
| Minibatch stochastic three points method for unconstrained smooth minimization | S Boucherouite, G Malinovsky, P Richtárik, EH Bergou | Proceedings of the AAAI Conference on Artificial Intelligence 38 (18), 20344 … | 1 | 2024 |
| MicroAdam: Accurate adaptive optimization with low space overhead and provable convergence | IV Modoranu, M Safaryan, G Malinovsky, E Kurtic, T Robert, P Richtárik, ... | arXiv preprint arXiv:2405.15593 | | 2024 |
| Streamlining in the Riemannian realm: Efficient Riemannian optimization with loopless variance reduction | Y Demidovich, G Malinovsky, P Richtárik | arXiv preprint arXiv:2403.06677 | | 2024 |
| MAST: Model-Agnostic Sparsified Training | Y Demidovich, G Malinovsky, E Shulgin, P Richtárik | arXiv preprint arXiv:2311.16086 | | 2023 |