Parameter-efficient fine-tuning for large models: A comprehensive survey

Z Han, C Gao, J Liu, SQ Zhang - arXiv preprint arXiv:2403.14608, 2024 - arxiv.org
Large models represent a groundbreaking advancement in multiple application fields,
enabling remarkable achievements across various tasks. However, their unprecedented …

End-edge-cloud collaborative computing for deep learning: A comprehensive survey

Y Wang, C Yang, S Lan, L Zhu… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
The booming development of deep learning applications and services heavily relies on
large deep learning models and massive data in the cloud. However, cloud-based deep …

LoRAPrune: Pruning meets low-rank parameter-efficient fine-tuning

M Zhang, H Chen, C Shen, Z Yang, L Ou, X Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large pre-trained models (LPMs), such as LLaMA and GLM, have shown exceptional
performance across various tasks through fine-tuning. Although low-rank adaptation (LoRA) …
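
As context for the entry above, here is a minimal sketch of the low-rank adaptation (LoRA) idea it builds on, assuming PyTorch; the class and parameter names are illustrative, not taken from the paper.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) B A x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pre-trained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

Only lora_A and lora_B are trained, so per-task storage grows with r rather than with the full weight matrix; the entry above studies combining such low-rank updates with pruning.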

A survey of resource-efficient llm and multimodal foundation models

M Xu, W Yin, D Cai, R Yi, D Xu, Q Wang, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large foundation models, including large language models (LLMs), vision transformers
(ViTs), diffusion models, and LLM-based multimodal models, are revolutionizing the entire machine …

Democaricature: Democratising caricature generation with a rough sketch

DY Chen, AK Bhunia, S Koley, A Sain… - Proceedings of the …, 2024 - openaccess.thecvf.com
In this paper, we democratise caricature generation, empowering individuals to effortlessly
craft personalised caricatures with just a photo and a conceptual sketch. Our objective is to …

Low-Resource Vision Challenges for Foundation Models

Y Zhang, H Doughty… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Low-resource settings are well-established in natural language processing, where many
languages lack sufficient data for deep learning at scale. However, low-resource problems …

Gradient-based Parameter Selection for Efficient Fine-Tuning

Z Zhang, Q Zhang, Z Gao, R Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
With the growing size of pre-trained models, full fine-tuning and storing all the parameters for
various downstream tasks is costly and infeasible. In this paper, we propose a new …
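
The title above names the technique; a hedged sketch of the general idea, assuming PyTorch, may clarify it: score parameters by gradient magnitude on a calibration batch and fine-tune only the top fraction. The criterion shown here illustrates the family of methods, and loss_fn and batch are hypothetical stand-ins, not the paper's exact procedure.

import torch

def gradient_selection_masks(model, loss_fn, batch, keep_ratio=0.01):
    """Return per-parameter boolean masks marking the largest-|gradient| entries."""
    model.zero_grad()
    loss_fn(model, batch).backward()  # one calibration backward pass
    scores = torch.cat([p.grad.abs().flatten()
                        for p in model.parameters() if p.grad is not None])
    threshold = torch.quantile(scores, 1.0 - keep_ratio)  # global magnitude cutoff
    masks = {name: p.grad.abs() >= threshold
             for name, p in model.named_parameters() if p.grad is not None}
    model.zero_grad()
    return masks

During fine-tuning, the masks gate the update, e.g. by multiplying each p.grad by masks[name] before the optimizer step, so only the selected entries move while the rest of the model keeps its pre-trained values.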

Efficient tuning and inference for large language models on textual graphs

Y Zhu, Y Wang, H Shi, S Tang - arXiv preprint arXiv:2401.15569, 2024 - arxiv.org
Rich textual and topological information of textual graphs needs to be modeled in real-world
applications such as webpages, e-commerce, and academic articles. Practitioners have …

Adapters Strike Back

JMO Steitz, S Roth - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Adapters provide an efficient and lightweight mechanism for adapting trained transformer
models to a variety of different tasks. However, they have often been found to be …
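
For readers unfamiliar with the mechanism this entry revisits, here is a minimal bottleneck adapter sketch, assuming PyTorch; the residual down-project/up-project design is the standard formulation, and the sizes are illustrative rather than the paper's configuration.

import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck module inserted after a frozen transformer sublayer."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # project to a small bottleneck
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)  # project back to model width
        nn.init.zeros_(self.up.weight)  # near-identity behavior at initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual keeps the frozen path intact

Because only the adapter weights are trained, each downstream task adds roughly 2 * dim * bottleneck parameters on top of the shared frozen backbone.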

Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer

Y Tan, Q Zhou, X Xiang, K Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Class-incremental learning (CIL) aims to enable models to continuously learn new classes
while overcoming catastrophic forgetting. The introduction of pre-trained models has brought …