Decentralized federated learning: A survey and perspective

L Yuan, Z Wang, L Sun, SY Philip… - IEEE Internet of Things …, 2024 - ieeexplore.ieee.org
Federated learning (FL) has been gaining attention for its ability to share knowledge while
maintaining user data, protecting privacy, increasing learning efficiency, and reducing …

A Survey of Resource-efficient LLM and Multimodal Foundation Models

M Xu, W Yin, D Cai, R Yi, D Xu, Q Wang, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large foundation models, including large language models (LLMs), vision transformers
(ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine …

Data-juicer: A one-stop data processing system for large language models

D Chen, Y Huang, Z Ma, H Chen, X Pan, C Ge… - Companion of the 2024 …, 2024 - dl.acm.org
The immense evolution in Large Language Models (LLMs) has underscored the importance
of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from …

Synergizing Foundation Models and Federated Learning: A Survey

S Li, F Ye, M Fang, J Zhao, YH Chan, ECH Ngai… - arXiv preprint arXiv …, 2024 - arxiv.org
The recent development of Foundation Models (FMs), represented by large language
models, vision transformers, and multimodal models, has been making a significant impact …

FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences

M Xu, D Cai, Y Wu, X Li, S Wang - 2024 USENIX Annual Technical …, 2024 - usenix.org
Large Language Models (LLMs) are transforming the landscape of mobile intelligence.
Federated Learning (FL), a method to preserve user data privacy, is often employed in fine …

FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission

Z Zhang, D Cai, Y Zhang, M Xu, S Wang… - Proceedings of the 4th …, 2024 - dl.acm.org
Communication overhead is a significant bottleneck in federated learning (FL), which has
been exacerbated by the increasing size of AI models. In this paper, we propose …

The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective

Z Qin, D Chen, W Zhang, L Yao, Y Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of large language models (LLMs) has been witnessed in recent
years. Based on the powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from …

Communication-Efficient Byzantine-Resilient Federated Zero-Order Optimization

ASD Neto, M Egger, M Bakshi, R Bitar - arXiv preprint arXiv:2406.14362, 2024 - arxiv.org
We introduce CYBER-0, the first zero-order optimization algorithm for memory- and
communication-efficient Federated Learning, resilient to Byzantine faults. We show through …

A Survey of Backpropagation-free Training for LLMs

H Mei, D Cai, Y Wu, S Wang, M Xu - Authorea Preprints, 2024 - techrxiv.org
Large language models (LLMs) have achieved remarkable performance in various
downstream tasks. However, training LLMs is computationally expensive and requires a …