Adapter is all you need for tuning visual tasks

D Yin, L Hu, B Li, Y Zhang - arXiv preprint arXiv:2311.15010, 2023 - arxiv.org
Pre-training & fine-tuning can enhance transfer efficiency and performance in visual
tasks. Recent delta-tuning methods provide more options for visual classification tasks …

BiGRU-ANN based hybrid architecture for intensified classification tasks with explainable AI

S Chakraborty, MBU Talukder, MM Hasan… - International Journal of …, 2023 - Springer
Artificial Intelligence (AI) is increasingly being employed in critical decision-making
processes such as medical diagnosis, credit approval, criminal justice, and many more …

One model for all domains: collaborative domain-prefix tuning for cross-domain NER

X Chen, L Li, S Qiao, N Zhang, C Tan, Y Jiang… - arXiv preprint arXiv …, 2023 - arxiv.org
Cross-domain NER is a challenging task for addressing the low-resource problem in practical
scenarios. Previous typical solutions mainly obtain a NER model via pre-trained language …

Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models

TY Zhuo, A Zebaze, N Suppattarachai… - arXiv preprint arXiv …, 2024 - arxiv.org
The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led
to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear …

5% > 100%: Breaking performance shackles of full fine-tuning on visual recognition tasks

D Yin, L Hu, B Li, Y Zhang, X Yang - arXiv preprint arXiv:2408.08345, 2024 - arxiv.org
Pre-training & fine-tuning can enhance transfer efficiency and performance in visual
tasks. Recent delta-tuning methods provide more options for visual classification tasks …

Pvp: Pre-trained visual parameter-efficient tuning

Z Song, K Yang, N Guan, J Zhu, P Qiao… - arXiv preprint arXiv …, 2023 - arxiv.org
Large-scale pre-trained transformers have demonstrated remarkable success in various
computer vision tasks. However, it is still highly challenging to fully fine-tune these models …

Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks

A Sinha, A Balashankar, A Beirami, T Avrahami… - arXiv preprint arXiv …, 2023 - arxiv.org
Real-world natural language processing systems need to be robust to human adversaries.
Collecting examples of human adversaries for training is an effective but expensive solution …

[Book][B] Introduction to Natural Language Processing (自然语言处理导论)

Q Zhang, T Gui, X Huang - 2023 - intro-nlp.github.io
Introduction to Natural Language Processing (draft, February 15, 2022). Contents: 1 Introduction
1.1 Planned chapters … 2 Language Models 2.1 Language …

Jointly reparametrized multi-layer adaptation for efficient and private tuning

U Gupta, A Galstyan, GV Steeg - arXiv preprint arXiv:2305.19264, 2023 - arxiv.org
Efficient finetuning of pretrained language transformers is becoming increasingly prevalent
for solving natural language processing tasks. While effective, it can still require a large …