Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H Xie, SZJ Qin, X Tao, FL Wang - arXiv preprint arXiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

EmotionQueen: A benchmark for evaluating empathy of large language models

Y Chen, H Wang, S Yan, S Liu, Y Li, Y Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
Emotional intelligence in large language models (LLMs) is of great importance in Natural
Language Processing. However, previous research mainly focuses on basic sentiment …

XMeCap: Meme caption generation with sub-image adaptability

Y Chen, S Yan, Z Zhu, Z Li, Y Xiao - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge
for machines. While advances have been made in natural language processing, real-world …

Harnessing earnings reports for stock predictions: A QLoRA-enhanced LLM approach

H Ni, S Meng, X Chen, Z Zhao, A Chen… - … Conference on Data …, 2024 - ieeexplore.ieee.org
Accurate stock market predictions following earnings reports are crucial for investors.
Traditional methods, particularly classical machine learning models, struggle with these …

MAPO: Boosting large language model performance with model-adaptive prompt optimization

Y Chen, Z Wen, G Fan, Z Chen, W Wu, D Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Prompt engineering, as an efficient and effective way to leverage Large Language Models
(LLMs), has drawn a lot of attention from the research community. The existing research …

HotVCom: Generating buzzworthy comments for videos

Y Chen, Y Qian, S Yan, J Jia, Z Li, Y Xiao, X Li… - arXiv preprint arXiv …, 2024 - arxiv.org
In the era of social media video platforms, popular "hot-comments" play a crucial role in
attracting user impressions of short-form videos, making them vital for marketing and …

Recent advancement of emotion cognition in large language models

Y Chen, Y Xiao - arXiv preprint arXiv:2409.13354, 2024 - arxiv.org
Emotion cognition in large language models (LLMs) is crucial for enhancing performance
across various applications, such as social media, human-computer interaction, and mental …

Parameter-efficient fine-tuning in large models: A survey of methodologies

L Wang, S Chen, L Jiang, S Pan, R Cai, S Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large models, as predicted by scaling-law forecasts, have made groundbreaking
progress in many fields, particularly in natural language generation tasks, where they have …

CUPID: Improving Battle Fairness and Position Satisfaction in Online MOBA Games with a Re-matchmaking System

G Fan, C Zhang, K Wang, Y Li, J Chen… - Proceedings of the ACM on …, 2024 - dl.acm.org
The multiplayer online battle arena (MOBA) genre has gained significant popularity and
economic success, attracting considerable research interest within the Human-Computer …

Preserving text space integrity for robust compositional zero-shot learning via mixture of pretrained experts

Z Hao, F Liu, L Jiao, Y Du, S Li, H Wang, P Li, X Liu… - Neurocomputing, 2025 - Elsevier
In the current landscape of Compositional Zero-Shot Learning (CZSL) methods that
leverage CLIP, the predominant approach is based on prompt learning paradigms. These …