Emerging vehicular applications with strict latency and reliability requirements impose heavy computational demands, and current vehicles' onboard computational resources are not adequate to …
The predominant paradigm for using machine learning models on a device is to train a model in the cloud and perform inference using the trained model on the device. However …
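A minimal sketch of that train-in-the-cloud, infer-on-device pattern, assuming PyTorch; the architecture, synthetic data, and the model.pt filename are illustrative, not from the snippet:

```python
import torch
import torch.nn as nn

# --- cloud side: train a small model and export its weights ---
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))  # synthetic batch
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
torch.save(model.state_dict(), "model.pt")  # the artifact shipped to devices

# --- device side: load the trained weights and run inference only ---
deployed = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
deployed.load_state_dict(torch.load("model.pt"))
deployed.eval()
with torch.no_grad():
    pred = deployed(torch.randn(1, 16)).argmax(dim=1)
```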
We study the mean estimation problem under communication and local differential privacy constraints. While previous work has proposed order-optimal algorithms for the same …
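The snippet does not show the algorithm, but a standard one-bit scheme conveys the flavor of satisfying both constraints at once: stochastic rounding plus randomized response gives each client a single eps-LDP bit, which the server debiases. A sketch, with all names and parameters hypothetical:

```python
import numpy as np

def ldp_bit(x, eps, rng):
    """One eps-LDP bit for a value x in [0, 1]:
    stochastic rounding followed by randomized response."""
    z = rng.random() < x                    # unbiased rounding to {0, 1}
    flip = 1.0 / (1.0 + np.exp(eps))        # randomized-response flip prob
    return z if rng.random() >= flip else not z

def estimate_mean(xs, eps, rng):
    """Server-side debiased mean from one bit per client."""
    flip = 1.0 / (1.0 + np.exp(eps))
    bits = np.fromiter((ldp_bit(x, eps, rng) for x in xs), dtype=float)
    return (bits.mean() - flip) / (1.0 - 2.0 * flip)

rng = np.random.default_rng(0)
data = rng.uniform(0.2, 0.8, size=100_000)  # hypothetical client values
print(estimate_mean(data, eps=1.0, rng=rng), data.mean())
```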
Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as the increasing concerns about sharing private data, there has been considerable …
Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage …
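For context on why that communication bottleneck arises, here is a minimal FedAvg-style round, the standard baseline rather than the method elided in the snippet; the quadratic local objective and data sizes are illustrative:

```python
import numpy as np

def local_sgd(w, data, lr=0.1, steps=5):
    """Client side: a few SGD steps on a local least-squares objective."""
    X, y = data
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    """Server side: broadcast w, collect updates, average by sample count.
    Every round moves a full model in each direction, hence the bottleneck."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_sgd(w_global.copy(), c) for c in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
w_true = rng.normal(size=8)
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 8))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
w = np.zeros(8)
for _ in range(20):
    w = fedavg_round(w, clients)
print(np.linalg.norm(w - w_true))           # should be close to zero
```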
Federated clustering (FC) is an unsupervised learning problem that arises in a number of practical applications, including personalized recommender and healthcare systems. With …
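One common FC construction is a federated analogue of Lloyd's k-means, in which clients share only per-cluster sums and counts rather than raw data. A sketch under that assumption, with illustrative synthetic partitions:

```python
import numpy as np

def local_stats(X, centroids):
    """Client side: assign local points, return per-cluster sums and counts
    (sufficient statistics only; raw points never leave the client)."""
    k, d = centroids.shape
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j], counts[j] = X[mask].sum(axis=0), mask.sum()
    return sums, counts

def federated_kmeans(client_data, k, rounds=10, seed=0):
    """Server side: aggregate client statistics, recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = rng.normal(size=(k, client_data[0].shape[1]))
    for _ in range(rounds):
        sums, counts = np.zeros_like(centroids), np.zeros(k)
        for X in client_data:
            s, c = local_stats(X, centroids)
            sums += s
            counts += c
        nz = counts > 0                     # avoid dividing by empty clusters
        centroids[nz] = sums[nz] / counts[nz, None]
    return centroids

rng = np.random.default_rng(1)
parts = [rng.normal(loc=m, size=(200, 2)) for m in (-4.0, 0.0, 4.0)]
print(federated_kmeans(parts, k=3))
```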
Y He, X Huang, K Yuan - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Communication compression is a common technique in distributed optimization that can alleviate communication overhead by transmitting compressed gradients and model …
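A representative instance of such compression is top-k sparsification with error feedback: each worker transmits only k coordinates per step and locally accumulates what was discarded. A sketch; the function names and toy objective are illustrative, not from the paper:

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def compressed_step(w, grads, residuals, k, lr=0.1):
    """One distributed step: each worker sends only k coordinates and keeps
    the compression error locally, feeding it back into the next message."""
    msgs = []
    for i, g in enumerate(grads):
        corrected = g + residuals[i]        # add back previously dropped mass
        msg = topk(corrected, k)            # the only thing transmitted
        residuals[i] = corrected - msg      # error-feedback buffer
        msgs.append(msg)
    return w - lr * np.mean(msgs, axis=0), residuals

rng = np.random.default_rng(0)
w = rng.normal(size=100)
residuals = [np.zeros(100) for _ in range(4)]
for _ in range(200):
    grads = [2 * w + 0.1 * rng.normal(size=100) for _ in range(4)]  # f = ||w||^2
    w, residuals = compressed_step(w, grads, residuals, k=10)
print(np.linalg.norm(w))                    # driven toward the optimum at 0
```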
L Chen, Y Ma, J Zhang - arXiv preprint arXiv:2306.14853, 2023 - arxiv.org
Bilevel optimization has various applications such as hyper-parameter optimization and meta-learning. Designing theoretically efficient algorithms for bilevel optimization is more …
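For a concrete instance of the bilevel structure, take hyperparameter optimization of a ridge penalty: the inner problem solves for w*(lam), and the outer gradient follows from the implicit function theorem. A sketch with synthetic data; this illustrates the problem class, not the paper's algorithm:

```python
import numpy as np

def inner_solution(X, y, lam):
    """Inner problem: w*(lam) = argmin_w ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def hypergradient(Xtr, ytr, Xval, yval, lam):
    """Outer gradient d L_val / d lam via implicit differentiation:
    stationarity (Xtr^T Xtr + lam I) w = Xtr^T ytr gives
    dw/dlam = -(Xtr^T Xtr + lam I)^{-1} w*."""
    d = Xtr.shape[1]
    w = inner_solution(Xtr, ytr, lam)
    dw_dlam = -np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), w)
    grad_val = 2 * Xval.T @ (Xval @ w - yval) / len(yval)
    return grad_val @ dw_dlam

rng = np.random.default_rng(0)
Xtr, Xval = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
ytr = Xtr @ w_true + 0.5 * rng.normal(size=80)
yval = Xval @ w_true + 0.5 * rng.normal(size=40)
lam = 1.0
for _ in range(100):                        # gradient descent on the outer var
    lam = max(1e-6, lam - 0.5 * hypergradient(Xtr, ytr, Xval, yval, lam))
print(lam)
```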
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck …
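A common compressor in this line of work is QSGD-style stochastic quantization, which randomly rounds each coordinate to a coarse grid so that the quantized gradient remains unbiased. A sketch; the level count and the unbiasedness check below are illustrative:

```python
import numpy as np

def quantize(v, levels, rng):
    """QSGD-style unbiased stochastic quantization: each coordinate is
    randomly rounded to a grid of `levels` points scaled by ||v||,
    so E[quantize(v)] = v while far fewer bits are sent."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels      # position on the grid
    lower = np.floor(scaled)
    q = lower + (rng.random(v.shape) < scaled - lower)  # round up w.p. frac
    return np.sign(v) * q * norm / levels

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
avg = np.mean([quantize(g, levels=4, rng=rng) for _ in range(2000)], axis=0)
print(np.abs(avg - g).max())                # small: the compressor is unbiased
```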