Adaptive compression for communication-efficient distributed training

M Makarenko, E Gasanov, R Islamov, A Sadiev… - arXiv preprint arXiv …, 2022 - arxiv.org
We propose Adaptive Compressed Gradient Descent (AdaCGD), a novel optimization
algorithm for communication-efficient training of supervised machine learning models with …

Inclusive Data Representation in Federated Learning: A Novel Approach Integrating Textual and Visual Prompt

Z Zhao, Z Shi, Y Liu, W Ding - Adjunct Proceedings of the 2023 ACM …, 2023 - dl.acm.org
Federated Learning (FL) is often impeded by communication overhead. Prompt
tuning has been introduced as a potential solution, adjusting only a few trainable parameters …