Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, J Zhang, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

Boosting continual learning of vision-language models via mixture-of-experts adapters

J Yu, Y Zhuge, L Zhang, P Hu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Continual learning can empower vision-language models to continuously acquire new
knowledge without the need for access to the entire historical dataset. However, mitigating …

UniPT: Universal parallel tuning for transfer learning with efficient parameter and memory

H Diao, B Wan, Y Zhang, X Jia… - Proceedings of the …, 2024 - openaccess.thecvf.com
Parameter-efficient transfer learning (PETL), i.e., fine-tuning a small portion of parameters, is an
effective strategy for adapting pre-trained models to downstream domains. To further reduce …
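The PETL idea summarized in this entry, updating only a small fraction of a pre-trained model's weights while the backbone stays frozen, can be illustrated with a minimal sketch; the backbone, head, and class count below are illustrative assumptions, not details of UniPT itself.

import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical PETL setup: freeze the pre-trained backbone and train only
# a small task head, so only a tiny portion of parameters receives gradients.
model = resnet50(weights="IMAGENET1K_V2")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # new head is trainable by default

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable / total:.2%} of all parameters")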

A survey of resource-efficient LLM and multimodal foundation models

M Xu, W Yin, D Cai, R Yi, D Xu, Q Wang, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large foundation models, including large language models (LLMs), vision transformers
(ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine …

Resource-efficient Algorithms and Systems of Foundation Models: A Survey

M Xu, D Cai, W Yin, S Wang, X Jin, X Liu - ACM Computing Surveys, 2024 - dl.acm.org
Large foundation models, including large language models, vision transformers, diffusion,
and LLM-based multimodal models, are revolutionizing the entire machine learning …

Personalized federated continual learning via multi-granularity prompt

H Yu, X Yang, X Gao, Y Kang, H Wang… - Proceedings of the 30th …, 2024 - dl.acm.org
Personalized Federated Continual Learning (PFCL) is a new practical scenario that poses
greater challenges in sharing and personalizing knowledge. PFCL not only relies on …

Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning

C Liao, T Tsiligkaridis, B Kulis - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Over the past year, a large body of multimodal research has emerged around zero-shot
evaluation using GPT descriptors. These studies boost the zero-shot accuracy of pretrained …

SHERL: Synthesizing high accuracy and efficient memory for resource-limited transfer learning

H Diao, B Wan, X Jia, Y Zhuge, Y Zhang, H Lu… - … on Computer Vision, 2025 - Springer
Parameter-efficient transfer learning (PETL) has emerged as a flourishing research field for
adapting large pre-trained models to downstream tasks, greatly reducing trainable …

LLaMA-Excitor: General Instruction Tuning via Indirect Feature Interaction

B Zou, C Yang, Y Qiao, C Quan… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing methods to fine-tune LLMs, like Adapter, Prefix-tuning, and LoRA, which introduce
extra modules or additional input sequences to inject new skills or knowledge, may …

Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks

T Qu, T Tuytelaars, MF Moens - European Conference on Computer …, 2025 - Springer
Mainstream parameter-efficient fine-tuning (PEFT) methods, such as LoRA or Adapter,
project a model's hidden states to a lower dimension, allowing pre-trained models to adapt …
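The mechanism this last snippet describes, projecting hidden states through a low-rank bottleneck, is common to LoRA- and Adapter-style PEFT. A minimal sketch of such a low-rank update follows; the class name, rank, and scaling are illustrative assumptions, not the routing-function method proposed in the paper.

import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    # Hypothetical illustration: wrap a frozen linear layer and add a trainable
    # low-rank update, y = W x + (alpha / r) * B A x, i.e. a rank-r bottleneck.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # pre-trained weights stay frozen
        self.down = nn.Linear(base.in_features, r, bias=False)   # project hidden states to rank r
        self.up = nn.Linear(r, base.out_features, bias=False)    # project back up
        nn.init.zeros_(self.up.weight)                           # start as a no-op on top of the base layer
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))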