Zachary Charles
Research Scientist, Google
Verified email at google.com - Homepage
Title · Cited by · Year
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
Foundations and trends® in machine learning 14 (1–2), 1-210, 2021
Cited by 5501 · 2021
Adaptive federated optimization
S Reddi, Z Charles, M Zaheer, Z Garrett, K Rush, J Konečný, S Kumar, ...
arXiv preprint arXiv:2003.00295, 2020
Cited by 1271 · 2020
Atomo: Communication-efficient learning via atomic sparsification
H Wang, S Sievert, S Liu, Z Charles, D Papailiopoulos, S Wright
Advances in neural information processing systems 31, 2018
Cited by 370 · 2018
A Field Guide to Federated Optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 329 · 2021
Draco: Byzantine-resilient distributed training via redundant gradients
L Chen, H Wang, Z Charles, D Papailiopoulos
International Conference on Machine Learning, 903-912, 2018
Cited by 276 · 2018
Advances and open problems in federated learning
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, M Raykova, D Song, W Song, SU Stich, Z Sun, AT Suresh, F Tramèr, ...
2021
Cited by 165 · 2021
Stability and generalization of learning algorithms that converge to global optima
Z Charles, D Papailiopoulos
International conference on machine learning, 745-754, 2018
Cited by 157 · 2018
DETOX: A redundancy-based framework for faster and more robust gradient aggregation
S Rajput, H Wang, Z Charles, D Papailiopoulos
Advances in Neural Information Processing Systems 32, 2019
Cited by 122 · 2019
On large-cohort training for federated learning
Z Charles, Z Garrett, Z Huo, S Shmulyian, V Smith
Advances in neural information processing systems 34, 20461-20475, 2021
Cited by 97 · 2021
Approximate gradient coding via sparse random graphs
Z Charles, D Papailiopoulos, J Ellenberg
arXiv preprint arXiv:1711.06771, 2017
Cited by 89 · 2017
Advances and open problems in federated learning. arXiv 2019
P Kairouz, HB McMahan, B Avent, A Bellet, M Bennis, AN Bhagoji, ...
arXiv preprint arXiv:1912.04977, 2019
Cited by 65 · 2019
Convergence and accuracy trade-offs in federated learning and meta-learning
Z Charles, J Konečný
International Conference on Artificial Intelligence and Statistics, 2575-2583, 2021
Cited by 64 · 2021
Erasurehead: Distributed gradient descent without delays using approximate gradient coding
H Wang, Z Charles, D Papailiopoulos
arXiv preprint arXiv:1901.09671, 2019
Cited by 58 · 2019
On the outsized importance of learning rates in local update methods
Z Charles, J Konečný
arXiv preprint arXiv:2007.00878, 2020
Cited by 54 · 2020
Gradient coding using the stochastic block model
Z Charles, D Papailiopoulos
2018 IEEE International Symposium on Information Theory (ISIT), 1998-2002, 2018
Cited by 48* · 2018
Local adaptivity in federated learning: Convergence and consistency
J Wang, Z Xu, Z Garrett, Z Charles, L Liu, G Joshi
arXiv preprint arXiv:2106.02305, 2021
Cited by 43 · 2021
Does data augmentation lead to positive margin?
S Rajput, Z Feng, Z Charles, PL Loh, D Papailiopoulos
International Conference on Machine Learning, 5321-5330, 2019
Cited by 42 · 2019
Motley: Benchmarking heterogeneity and personalization in federated learning
S Wu, T Li, Z Charles, Y Xiao, Z Liu, Z Xu, V Smith
arXiv preprint arXiv:2206.09262, 2022
Cited by 37 · 2022
A geometric perspective on the transferability of adversarial directions
Z Charles, H Rosenberg, D Papailiopoulos
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 23 · 2019
Optimizing the communication-accuracy trade-off in federated learning with rate-distortion theory
N Mitchell, J Ballé, Z Charles, J Konečný
arXiv preprint arXiv:2201.02664, 2022
Cited by 20 · 2022
Articles 1–20