S Chakraborty, MBU Talukder, MM Hasan… - International Journal of …, 2023 - Springer
Artificial Intelligence (AI) is increasingly being employed in critical decision-making processes such as medical diagnosis, credit approval, criminal justice, and many more …
Cross-domain NER is a challenging task that addresses the low-resource problem in practical scenarios. Previous typical solutions mainly obtain a NER model via pre-trained language …
TY Zhuo, A Zebaze, N Suppattarachai… - arXiv preprint arXiv …, 2024 - arxiv.org
The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear …
D Yin, L Hu, B Li, Y Zhang, X Yang - arXiv preprint arXiv:2408.08345, 2024 - arxiv.org
Pre-training and fine-tuning can enhance transfer efficiency and performance in visual tasks. Recent delta-tuning methods provide more options for visual classification tasks …
Z Song, K Yang, N Guan, J Zhu, P Qiao… - arXiv preprint arXiv …, 2023 - arxiv.org
Large-scale pre-trained transformers have demonstrated remarkable success in various computer vision tasks. However, it is still highly challenging to fully fine-tune these models …
Real-world natural language processing systems need to be robust to human adversaries. Collecting examples of human adversaries for training is an effective but expensive solution …
Efficient fine-tuning of pre-trained language transformers is becoming increasingly prevalent for solving natural language processing tasks. While effective, it can still require a large …