J Yu, Y Zhuge, L Zhang, P Hu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Continual learning can empower vision-language models to continuously acquire new knowledge without the need for access to the entire historical dataset. However, mitigating …
Parameter-efficient transfer learning (PETL), i.e., fine-tuning a small portion of parameters, is an effective strategy for adapting pre-trained models to downstream domains. To further reduce …
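As a minimal sketch of the PETL idea described in this snippet (not the specific method of the cited paper), the pre-trained backbone is frozen and only a small bottleneck adapter is trained; the module and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Hypothetical pre-trained backbone; any frozen feature extractor plays the same role.
backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
for p in backbone.parameters():
    p.requires_grad = False              # the pre-trained weights stay fixed

adapter = BottleneckAdapter(dim=512)      # only this small module is trained
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)

x = torch.randn(8, 512)                   # dummy batch of features
loss = adapter(backbone(x)).pow(2).mean() # placeholder loss, for illustration only
loss.backward()
optimizer.step()
```

The trainable parameter count is just the two small linear layers, which is the "small portion of parameters" the snippet refers to.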
Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion, and LLM-based multimodal models, are revolutionizing the entire machine …
Personalized Federated Continual Learning (PFCL) is a new practical scenario that poses greater challenges in sharing and personalizing knowledge. PFCL not only relies on …
Over the past year, a large body of multimodal research has emerged around zero-shot evaluation using GPT descriptors. These studies boost the zero-shot accuracy of pretrained …
H Diao, B Wan, X Jia, Y Zhuge, Y Zhang, H Lu… - … on Computer Vision, 2025 - Springer
Parameter-efficient transfer learning (PETL) has emerged as a flourishing research field for adapting large pre-trained models to downstream tasks, greatly reducing trainable …
B Zou, C Yang, Y Qiao, C Quan… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing methods to fine-tune LLMs, such as Adapter, Prefix-tuning, and LoRA, which introduce extra modules or additional input sequences to inject new skills or knowledge, may …
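To make concrete what "additional input sequences" means here, the sketch below shows a prefix-tuning-style wrapper: trainable virtual-token embeddings are prepended to the input of a frozen encoder. The encoder, dimensions, and prefix length are illustrative assumptions, not the cited paper's setup.

```python
import torch
import torch.nn as nn

class PrefixTuningWrapper(nn.Module):
    """Prepend trainable 'virtual token' embeddings to the input of a frozen encoder."""
    def __init__(self, encoder: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                       # frozen pre-trained weights
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, token_embeds):                       # (batch, seq, dim)
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prefix, token_embeds], dim=1))

# Hypothetical frozen encoder standing in for a pre-trained LLM body.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
model = PrefixTuningWrapper(encoder, embed_dim=256, prefix_len=10)

out = model(torch.randn(2, 32, 256))   # 2 sequences of 32 token embeddings
print(out.shape)                        # torch.Size([2, 42, 256]) -- prefix adds 10 positions
```

The extra prefix positions lengthen every forward pass, which is one of the overheads such methods introduce.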
Mainstream parameter-efficient fine-tuning (PEFT) methods, such as LoRA or Adapter, project a model's hidden states to a lower dimension, allowing pre-trained models to adapt …
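The low-dimensional projection mentioned in this snippet can be illustrated with a LoRA-style layer: a frozen dense weight plus a trainable low-rank update that projects down to rank r and back up. This is a generic sketch of the technique, with layer sizes and scaling chosen for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # pre-trained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection
        self.scale = alpha / r

    def forward(self, x):
        low_rank = (x @ self.lora_A.T) @ self.lora_B.T   # hidden states -> r dims -> back up
        return self.base(x) + self.scale * low_rank

# Wrap a hypothetical pre-trained projection layer with the low-rank update.
pretrained = nn.Linear(768, 768)
layer = LoRALinear(pretrained, r=8)
y = layer(torch.randn(4, 768))          # only lora_A and lora_B receive gradients
```

Initializing lora_B to zeros keeps the wrapped layer identical to the pre-trained one at the start of adaptation, a common choice in low-rank fine-tuning.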