Pre-Training and Personalized Fine-Tuning via Over-the-Air Federated Meta-Learning: Convergence-Generalization Trade-Offs

H Wen, H Xing, O Simeone - arXiv preprint arXiv:2406.11569, 2024 - arxiv.org
For modern artificial intelligence (AI) applications such as large language models (LLMs),
the training paradigm has recently shifted to pre-training followed by fine-tuning …

Personalized wireless federated learning for large language models

F Jiang, L Dong, S Tu, Y Peng, K Wang, K Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have revolutionized natural language processing tasks.
However, their deployment in wireless networks still faces challenges, i.e., a lack of privacy …

FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts

H Mei, D Cai, A Zhou, S Wang, M Xu - arXiv preprint arXiv:2408.11304, 2024 - arxiv.org
As Large Language Models (LLMs) push the boundaries of AI capabilities, their demand for
data is growing. Much of this data is private and distributed across edge devices, making …

FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition

Y Yan, S Tang, Z Shi, Q Yang - arXiv preprint arXiv:2404.18848, 2024 - arxiv.org
Pre-trained Language Models (PLMs) have shown excellent performance on various
downstream tasks after fine-tuning. Nevertheless, the escalating concerns surrounding user …

FedLoRA: When Personalized Federated Learning Meets Low-Rank Adaptation

X Wu, X Liu, J Niu, H Wang, S Tang, G Zhu - 2024 - openreview.net
In this research paper, we introduce a novel approach to Personalized Federated Learning
(PFL), which we call FedLoRA. This approach is inspired by recent advancements in fine …

Client-customized adaptation for parameter-efficient federated learning

Y Kim, J Kim, WL Mok, JH Park… - Findings of the …, 2023 - aclanthology.org
Despite the versatility of pre-trained language models (PLMs) across domains, their large
memory footprints pose significant challenges in federated learning (FL), where the training …

Thinking Forward: Memory-Efficient Federated Finetuning of Language Models

K Panchal, N Parikh, S Choudhary, L Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Finetuning large language models (LLMs) in federated learning (FL) settings has become
important as it allows resource-constrained devices to finetune a model using private data …

On the convergence of zeroth-order federated tuning for large language models

Z Ling, D Chen, L Yao, Y Li, Y Shen - Proceedings of the 30th ACM …, 2024 - dl.acm.org
The confluence of Federated Learning (FL) and Large Language Models (LLMs) is ushering
in a new era in privacy-preserving natural language processing. However, the intensive …

Federatedscope-llm: A comprehensive package for fine-tuning large language models in federated learning

W Kuang, B Qian, Z Li, D Chen, D Gao, X Pan… - Proceedings of the 30th …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated great capabilities in various natural
language understanding and generation tasks. These pre-trained LLMs can be further …

Federated fine-tuning of large language models under heterogeneous language tasks and client resources

J Bai, D Chen, B Qian, L Yao, Y Li - arXiv preprint arXiv:2402.11505, 2024 - arxiv.org
Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of
Large Language Models (LLMs). While promising, it raises significant challenges due to the …