Y Xin, S Luo, H Zhou, J Du, X Liu, Y Fan, Q Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large-scale pre-trained vision models (PVMs) have shown great potential for adaptability across various downstream vision tasks. However, with state-of-the-art PVMs growing to …
Pre-trained vision transformers have strong representations that benefit various downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT) methods have been proposed …
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while maintaining the out-of-distribution (OOD) robustness of a pre-trained model when …
J Wang, Y Xu, J Hu, M Yan, J Sang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Fine-tuning a visual pre-trained model can leverage the semantic information from large-scale pre-training data and mitigate the over-fitting problem on downstream vision tasks with …
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or tune only the last linear layer (linear probing), which suffers …
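The contrast drawn in this snippet can be sketched numerically. Below is a minimal, illustrative linear-probing setup: a hypothetical frozen "backbone" (a fixed random projection standing in for a pre-trained feature extractor) whose outputs feed a small trainable linear head. All names and dimensions are assumptions for illustration, not any paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen backbone: a fixed random projection standing in
# for a pre-trained feature extractor (assumption for illustration).
W_backbone = rng.standard_normal((16, 8))
W_backbone_init = W_backbone.copy()  # kept to show the backbone never changes

def extract_features(x):
    # Backbone weights stay frozen under linear probing.
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Linear probe: only this head (8 * 3 = 24 parameters) is trained,
# versus 16 * 8 = 128 in the backbone alone.
W_head = np.zeros((8, 3))

def logistic_step(x, y_onehot, lr=0.1):
    global W_head
    feats = extract_features(x)
    logits = feats @ W_head
    # Softmax cross-entropy gradient with respect to the head only.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = feats.T @ (probs - y_onehot) / len(x)
    W_head -= lr * grad

x = rng.standard_normal((32, 16))
y = np.eye(3)[rng.integers(0, 3, size=32)]
for _ in range(100):
    logistic_step(x, y)
```

Full fine-tuning would instead update `W_backbone` as well, which is what makes it expensive as model sizes grow; linear probing trades that cost for the limited expressiveness the snippet alludes to.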
Parameter-efficient transfer learning (PETL) is an emerging research area aimed at inexpensively adapting large-scale pre-trained models to downstream tasks. Recent …
Federated Learning (FL) is an emerging paradigm that enables distributed users to collaboratively and iteratively train machine learning models without sharing their private …
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference …
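The no-extra-inference-cost property mentioned here follows from LoRA's structure: the trained low-rank update `B @ A` can be folded back into the frozen weight, so inference uses a single matmul. A minimal numpy sketch (dimensions, rank, and scaling are illustrative assumptions, not values from any cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 4      # hidden size and LoRA rank (illustrative values)
alpha = 8.0       # LoRA scaling factor

W = rng.standard_normal((d, d))         # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # B starts at zero, so the update is a no-op at init

# Pretend training has updated B (stand-in for gradient steps).
B = rng.standard_normal((d, r)) * 0.01

def forward_lora(x):
    # Training-time path: frozen base plus scaled low-rank update.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

def merge_weights():
    # After training, fold the update into W: inference then needs only
    # one matmul, with no extra latency over the base model.
    return W + (alpha / r) * (B @ A)

x = rng.standard_normal((2, d))
W_merged = merge_weights()
```

The trainable parameter count is `2 * d * r` (512 here) against `d * d` (4096) for the full matrix, which is the efficiency side of the trade-off.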
S Jie, H Wang, ZH Deng - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models. However, with the exponential growth of model sizes, the …