Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H Xie, SZJ Qin, X Tao, FL Wang - arXiv preprint arXiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

ReFT: Representation finetuning for language models

Z Wu, A Arora, Z Wang, A Geiger, D Jurafsky… - arXiv preprint arXiv …, 2024 - arxiv.org
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a
small number of weights. However, much prior interpretability work has shown that …

Parameter-efficient orthogonal finetuning via butterfly factorization

W Liu, Z Qiu, Y Feng, Y Xiu, Y Xue, L Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large foundation models are becoming ubiquitous, but training them from scratch is
prohibitively expensive. Thus, efficiently adapting these powerful models to downstream …

PiSSA: Principal singular values and singular vectors adaptation of large language models

F Meng, Z Wang, M Zhang - arXiv preprint arXiv:2404.02948, 2024 - arxiv.org
As the parameters of LLMs expand, the computational cost of fine-tuning the entire model
becomes prohibitive. To address this challenge, we introduce a PEFT method, Principal …

Matrix-transformation based low-rank adaptation (MTLoRA): A brain-inspired method for parameter-efficient fine-tuning

Y Liang, Y Wang, Y Zeng - arXiv preprint arXiv:2403.07440, 2024 - arxiv.org
Fine-tuning techniques based on Large Pretrained Language Models (LPLMs) have been
proven to significantly enhance model performance on a variety of downstream tasks and …

Federated LoRA with Sparse Communication

K Kuo, A Raje, K Rajesh, V Smith - arXiv preprint arXiv:2406.05233, 2024 - arxiv.org
Low-rank adaptation (LoRA) is a natural method for finetuning in communication-
constrained machine learning settings such as cross-device federated learning. Prior work …

CorDA: Context-Oriented Decomposition Adaptation of Large Language Models

Y Yang, X Li, Z Zhou, SL Song, J Wu, L Nie… - arXiv preprint arXiv …, 2024 - arxiv.org
Current parameter-efficient fine-tuning (PEFT) methods build adapters without considering
the context of the downstream task to learn, or the context of important knowledge to maintain …

Privacy-preserving fine-tuning of artificial intelligence (AI) foundation models with federated learning, differential privacy, offsite tuning, and parameter-efficient fine …

J Zhao - Authorea Preprints, 2023 - techrxiv.org
Artificial Intelligence (AI) Foundation Models (FMs), pre-trained on massive datasets, have
recently emerged as a pivotal asset in a wide array of tasks. Examples of FMs include Large …

MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning

J Wang, G Yang, W Chen, H Yi, X Wu, Q Lao - arXiv preprint arXiv …, 2024 - arxiv.org
In response to the challenges posed by the extensive parameter updates required for full
fine-tuning of large-scale pre-trained models, parameter-efficient fine-tuning (PEFT) …

SA-FedLora: Adaptive Parameter Allocation for Efficient Federated Learning with LoRA Tuning

Y Yang, X Liu, T Gao, X Xu, G Wang - arXiv preprint arXiv:2405.09394, 2024 - arxiv.org
Fine-tuning large-scale pre-trained models via transfer learning is an emerging important
paradigm for a wide range of downstream tasks, with performance heavily reliant on …