Advances and open problems in federated learning. P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ... Foundations and Trends® in Machine Learning 14 (1–2), 1–210, 2021. | 5501 | 2021 |
Adaptive federated optimization. S Reddi, Z Charles, M Zaheer, Z Garrett, K Rush, J Konečný, S Kumar, ... arXiv preprint arXiv:2003.00295, 2020. | 1271 | 2020 |
Atomo: Communication-efficient learning via atomic sparsification. H Wang, S Sievert, S Liu, Z Charles, D Papailiopoulos, S Wright. Advances in Neural Information Processing Systems 31, 2018. | 370 | 2018 |
A field guide to federated optimization. J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ... arXiv preprint arXiv:2107.06917, 2021. | 329 | 2021 |
Draco: Byzantine-resilient distributed training via redundant gradients. L Chen, H Wang, Z Charles, D Papailiopoulos. International Conference on Machine Learning, 903–912, 2018. | 276 | 2018 |
Advances and open problems in federated learning. P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ... Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth …, 2021. | 165 | 2021 |
Stability and generalization of learning algorithms that converge to global optima. Z Charles, D Papailiopoulos. International Conference on Machine Learning, 745–754, 2018. | 157 | 2018 |
DETOX: A redundancy-based framework for faster and more robust gradient aggregation. S Rajput, H Wang, Z Charles, D Papailiopoulos. Advances in Neural Information Processing Systems 32, 2019. | 122 | 2019 |
On large-cohort training for federated learning. Z Charles, Z Garrett, Z Huo, S Shmulyian, V Smith. Advances in Neural Information Processing Systems 34, 20461–20475, 2021. | 97 | 2021 |
Approximate gradient coding via sparse random graphs. Z Charles, D Papailiopoulos, J Ellenberg. arXiv preprint arXiv:1711.06771, 2017. | 89 | 2017 |
Advances and open problems in federated learning. P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ... arXiv preprint arXiv:1912.04977, 2019. | 65 | 2019 |
Convergence and accuracy trade-offs in federated learning and meta-learning. Z Charles, J Konečný. International Conference on Artificial Intelligence and Statistics, 2575–2583, 2021. | 64 | 2021 |
ErasureHead: Distributed gradient descent without delays using approximate gradient coding. H Wang, Z Charles, D Papailiopoulos. arXiv preprint arXiv:1901.09671, 2019. | 58 | 2019 |
On the outsized importance of learning rates in local update methods. Z Charles, J Konečný. arXiv preprint arXiv:2007.00878, 2020. | 54 | 2020 |
Gradient coding using the stochastic block model. Z Charles, D Papailiopoulos. 2018 IEEE International Symposium on Information Theory (ISIT), 1998–2002, 2018. | 48* | 2018 |
Local adaptivity in federated learning: Convergence and consistency. J Wang, Z Xu, Z Garrett, Z Charles, L Liu, G Joshi. arXiv preprint arXiv:2106.02305, 2021. | 43 | 2021 |
Does data augmentation lead to positive margin? S Rajput, Z Feng, Z Charles, PL Loh, D Papailiopoulos. International Conference on Machine Learning, 5321–5330, 2019. | 42 | 2019 |
Motley: Benchmarking heterogeneity and personalization in federated learning. S Wu, T Li, Z Charles, Y Xiao, Z Liu, Z Xu, V Smith. arXiv preprint arXiv:2206.09262, 2022. | 37 | 2022 |
A geometric perspective on the transferability of adversarial directions. Z Charles, H Rosenberg, D Papailiopoulos. The 22nd International Conference on Artificial Intelligence and Statistics …, 2019. | 23 | 2019 |
Optimizing the communication-accuracy trade-off in federated learning with rate-distortion theory. N Mitchell, J Ballé, Z Charles, J Konečný. arXiv preprint arXiv:2201.02664, 2022. | 20 | 2022 |