Sensitivity-aware visual parameter-efficient fine-tuning

H He, J Cai, J Zhang, D Tao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative
to full fine-tuning for adapting pre-trained vision models to downstream tasks, which only …
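
The snippet cuts off before describing the mechanism. A common way to make fine-tuning "sensitivity-aware" is to rank parameters by a first-order saliency score, |gradient × weight|, and unfreeze only the most sensitive tensors. The sketch below illustrates that generic idea in PyTorch; the model, shapes, and the top-k rule are all illustrative assumptions, not necessarily the paper's exact criterion.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained vision backbone.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# One probe batch to estimate sensitivity (shapes are made up).
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
nn.functional.cross_entropy(model(x), y).backward()

# First-order saliency |grad * weight|, a common sensitivity proxy.
scores = {n: (p.grad * p).abs().sum().item() for n, p in model.named_parameters()}

# Keep only the k most sensitive tensors trainable; freeze the rest.
k = 2
top = set(sorted(scores, key=scores.get, reverse=True)[:k])
for n, p in model.named_parameters():
    p.requires_grad_(n in top)
```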

Parameter-efficient fine-tuning for pre-trained vision models: A survey

Y Xin, S Luo, H Zhou, J Du, X Liu, Y Fan, Q Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large-scale pre-trained vision models (PVMs) have shown strong adaptability
across various downstream vision tasks. However, with state-of-the-art PVMs growing to …

SCT: A simple baseline for parameter-efficient fine-tuning via salient channels

HH Zhao, P Wang, Y Zhao, H Luo, F Wang… - International Journal of …, 2024 - Springer
Pre-trained vision transformers have strong representations that benefit various downstream
tasks. Recently, many parameter-efficient fine-tuning (PEFT) methods have been proposed …
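
The title points to tuning only "salient channels". One minimal reading of that idea: freeze the whole layer and learn an additive update restricted to a few output channels. The module below is a sketch under that assumption; the channel indices and the class name `SalientChannelLinear` are mine, and the paper's saliency criterion may differ.

```python
import torch
import torch.nn as nn

class SalientChannelLinear(nn.Module):
    """Frozen linear layer plus a trainable update on a few output channels.
    An illustrative reading of 'salient channel tuning', not the paper's
    exact formulation."""
    def __init__(self, base: nn.Linear, salient: torch.Tensor):
        super().__init__()
        self.base = base.requires_grad_(False)   # backbone weights stay frozen
        self.salient = salient                   # indices of the tuned channels
        self.delta = nn.Parameter(torch.zeros(len(salient), base.in_features))

    def forward(self, x):
        out = self.base(x)
        upd = torch.zeros_like(out)
        upd[..., self.salient] = x @ self.delta.T  # touch salient channels only
        return out + upd

layer = SalientChannelLinear(nn.Linear(64, 64), salient=torch.tensor([3, 17, 42]))
y = layer(torch.randn(8, 64))                    # only `delta` gets gradients
```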

Fast trainable projection for robust fine-tuning

J Tian, YC Liu, JS Smith, Z Kira - Advances in Neural …, 2024 - proceedings.neurips.cc
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while
maintaining the out-of-distribution (OOD) robustness of a pre-trained model when …
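
Projection-based robust fine-tuning keeps the fine-tuned weights inside a ball around the pre-trained ones, so out-of-distribution behavior cannot drift arbitrarily far. The paper's contribution is making such projections fast with trainable radii; the sketch below uses a fixed radius per tensor purely for illustration.

```python
import torch

@torch.no_grad()
def project_to_ball(params, anchors, radius=1.0):
    """After each optimizer step, pull each weight tensor back into an
    L2 ball of the given radius around its pre-trained value. The paper
    learns per-layer radii; a fixed one is an illustrative simplification."""
    for p, p0 in zip(params, anchors):
        diff = p - p0
        norm = diff.norm()
        if norm > radius:
            p.copy_(p0 + diff * (radius / norm))

# Usage sketch: anchors = [p.detach().clone() for p in model.parameters()],
# then call project_to_ball(model.parameters(), anchors) after optimizer.step().
```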

Improved visual fine-tuning with natural language supervision

J Wang, Y Xu, J Hu, M Yan, J Sang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Fine-tuning a pre-trained visual model can leverage the semantic information from large-
scale pre-training data and mitigate the overfitting problem on downstream vision tasks with …
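
The snippet does not say how the language supervision enters. One widely used pattern, and only an assumption about this particular paper, is CLIP-style: freeze text embeddings of the class names and use them as the classifier head, so the text semantics regularize the visual features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAnchoredHead(nn.Module):
    """Classifier whose weights are frozen text embeddings of the class
    names. A generic CLIP-style illustration; the paper's exact use of
    language supervision may differ."""
    def __init__(self, class_text_emb: torch.Tensor):   # (num_classes, dim)
        super().__init__()
        self.register_buffer("anchors", F.normalize(class_text_emb, dim=-1))
        self.scale = nn.Parameter(torch.tensor(10.0))   # learnable temperature

    def forward(self, feats):                           # (batch, dim)
        feats = F.normalize(feats, dim=-1)
        return self.scale * feats @ self.anchors.T      # cosine-similarity logits
```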

Scaling & shifting your features: A new baseline for efficient model tuning

D Lian, D Zhou, J Feng, X Wang - Advances in Neural …, 2022 - proceedings.neurips.cc
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-
tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers …
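
The title names the mechanism: a per-channel scale and shift, y = γ ⊙ x + β, inserted after frozen operations, with only γ and β trained. Because the transform is affine, it can be folded into the preceding linear layer after training, so inference cost is unchanged. A minimal sketch (module and function names are mine):

```python
import torch
import torch.nn as nn

class SSF(nn.Module):
    """Per-channel scale-and-shift: y = gamma * x + beta. Only these two
    vectors are trained; everything upstream stays frozen."""
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                  # x: (..., dim)
        return x * self.gamma + self.beta

@torch.no_grad()
def merge_into_linear(lin: nn.Linear, ssf: SSF) -> nn.Linear:
    """Fold SSF into the preceding linear layer for zero-overhead inference:
    gamma * (W x + b) + beta = (gamma-scaled W) x + (gamma * b + beta)."""
    merged = nn.Linear(lin.in_features, lin.out_features)
    merged.weight.copy_(lin.weight * ssf.gamma[:, None])
    merged.bias.copy_(lin.bias * ssf.gamma + ssf.beta)
    return merged
```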

Towards efficient visual adaption via structural re-parameterization

G Luo, M Huang, Y Zhou, X Sun, G Jiang… - arXiv preprint arXiv …, 2023 - arxiv.org
Parameter-efficient transfer learning (PETL) is an emerging research area aimed at
inexpensively adapting large-scale pre-trained models to downstream tasks. Recent …
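
The core trick behind re-parameterized adapters is that a linear adapter branch can be absorbed into the frozen weight it feeds once training is done, leaving zero extra inference cost. A one-function sketch of that algebra (the paper's actual adapter design is more elaborate):

```python
import torch

@torch.no_grad()
def absorb_adapter(W: torch.Tensor, B: torch.Tensor, C: torch.Tensor):
    """Merge a low-rank adapter x -> x + B @ C @ x, applied before a frozen
    linear map W, into W itself: W (I + B C) = W + (W B) C.
    Shapes: W (out, in), B (in, r), C (r, in)."""
    return W + (W @ B) @ C
```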

FedTune: A deep dive into efficient federated fine-tuning with pre-trained transformers

J Chen, W Xu, S Guo, J Wang, J Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
Federated Learning (FL) is an emerging paradigm that enables distributed users to
collaboratively and iteratively train machine learning models without sharing their private …
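
Combining federated learning with parameter-efficient fine-tuning typically means clients train and upload only the small PEFT tensors while the frozen backbone never moves. A generic weighted-FedAvg sketch over those tensors (not FedTune's exact protocol):

```python
import torch

def fedavg_trainable(client_states, weights):
    """Weighted average of the PEFT parameters returned by each client.
    `client_states` is a list of {name: tensor} dicts holding only the
    trainable (e.g. adapter) tensors; `weights` are client sample counts."""
    total = sum(weights)
    return {
        k: sum(w * s[k] for s, w in zip(client_states, weights)) / total
        for k in client_states[0]
    }
```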

DoRA: Weight-decomposed low-rank adaptation

SY Liu, CY Wang, H Yin, P Molchanov… - arXiv preprint arXiv …, 2024 - arxiv.org
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its
variants have gained considerable popularity because they avoid additional inference …
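
DoRA's decomposition is public: the pre-trained weight is split into a per-column magnitude m and a direction V, LoRA updates the direction, and the adapted weight is W' = m * (W0 + BA) / ||W0 + BA|| with a column-wise norm. A compact sketch (rank and initialization are illustrative):

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Weight-decomposed low-rank adaptation:
    W' = m * (W0 + B A) / ||W0 + B A||, norm taken per column.
    Bias handling and initialization details are simplified."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.register_buffer("W0", base.weight.detach().clone())  # frozen
        self.m = nn.Parameter(self.W0.norm(dim=0))                # magnitudes
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x):
        V = self.W0 + self.B @ self.A                    # adapted direction
        W = self.m * (V / V.norm(dim=0, keepdim=True))   # renormalize columns
        return x @ W.T
```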

Revisiting the parameter efficiency of adapters from the perspective of precision redundancy

S Jie, H Wang, ZH Deng - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-
trained vision models. However, with the exponential growth of model sizes, the …
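
"Precision redundancy" suggests the efficiency lever is bit-width rather than parameter count: trained adapter weights tolerate very low-precision storage. The quantizer below is a plain uniform one for illustration, not the paper's scheme.

```python
import torch

def quantize_uniform(w: torch.Tensor, bits: int = 4):
    """Uniformly quantize a trained adapter tensor to `bits` bits for
    storage; dequantize with q.float() * scale. A generic quantizer,
    not the paper's exact method."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax).to(torch.int8)
    return q, scale
```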