LaMP: When large language models meet personalization

A Salemi, S Mysore, M Bendersky, H Zamani - arXiv preprint arXiv …, 2023 - arxiv.org
This paper highlights the importance of personalization in large language models and
introduces the LaMP benchmark--a novel benchmark for training and evaluating language …
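
The snippet describes conditioning a language model on a user's profile, the setup LaMP benchmarks. Below is a minimal sketch of that retrieval-augmented personalization idea: the profile entries most relevant to the current input are selected and prepended to the prompt. The token-overlap scorer and the prompt template are illustrative assumptions, not the benchmark's exact pipeline.

```python
# Sketch: retrieval-augmented personalized prompting in the spirit of LaMP.
# The overlap scorer and template below are illustrative assumptions.

def score(query: str, entry: str) -> int:
    # Crude relevance: count of shared lowercase tokens (a stand-in for BM25).
    q = set(query.lower().split())
    return len(q & set(entry.lower().split()))

def personalized_prompt(query: str, profile: list[str], k: int = 2) -> str:
    # Keep the k profile entries most relevant to the current input.
    top = sorted(profile, key=lambda e: score(query, e), reverse=True)[:k]
    context = "\n".join(f"- {e}" for e in top)
    return f"User history:\n{context}\n\nTask input: {query}\nAnswer:"

profile = [
    "rated 'The Matrix' 5 stars",
    "wrote a review praising slow-burn thrillers",
    "disliked romantic comedies",
]
print(personalized_prompt("Will this user enjoy 'Inception'?", profile))
```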

Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning

K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2023 - Elsevier
Sentiment analysis is one of the traditional well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
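
Since this entry surveys sentiment analysis via prompt learning, a small sketch of the standard cloze-style formulation may help: the input is wrapped in a template with a mask slot, and verbalizer words stand in for class labels. The template, the choice of bert-base-uncased, and the good/bad verbalizers are assumptions for illustration.

```python
# Sketch: cloze-style prompt learning for sentiment, assuming a masked LM
# and a hand-picked verbalizer {good -> positive, bad -> negative}.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def classify(text: str) -> str:
    # Wrap the input in a template whose [MASK] slot the model must fill.
    prompt = f"{text} Overall, it was [MASK]."
    scores = {r["token_str"]: r["score"] for r in fill(prompt, targets=["good", "bad"])}
    return "positive" if scores.get("good", 0.0) >= scores.get("bad", 0.0) else "negative"

print(classify("The plot dragged and the acting was wooden."))
```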

ProQA: Structural prompt-based pre-training for unified question answering

W Zhong, Y Gao, N Ding, Y Qin, Z Liu, M Zhou… - arXiv preprint arXiv …, 2022 - arxiv.org
Question Answering (QA) is a longstanding challenge in natural language processing.
Existing QA works mostly focus on specific question types, knowledge domains, or …
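
ProQA's central idea, per the title and snippet, is a structural prompt that unifies QA formats rather than training per-format models. The slot layout below (format, domain, question, and context keys flattened into one string) is an assumption sketching what such a prompt could look like, not the paper's exact token scheme.

```python
# Sketch: a structural prompt for unified QA in the spirit of ProQA.
# The slot names and special-token syntax are illustrative assumptions.

def structural_prompt(fmt: str, domain: str, question: str, context: str) -> str:
    slots = {
        "[FORMAT]": fmt,        # e.g. extractive, multiple-choice, abstractive
        "[DOMAIN]": domain,     # knowledge domain of the question
        "[QUESTION]": question,
        "[CONTEXT]": context,
    }
    # One flat string the encoder sees; every QA task shares this structure.
    return " ".join(f"{key} {value}" for key, value in slots.items())

print(structural_prompt(
    fmt="extractive",
    domain="geography",
    question="What is the capital of France?",
    context="France's capital and largest city is Paris.",
))
```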

UserIdentifier: implicit user representations for simple and effective personalized sentiment analysis

F Mireshghallah, V Shrivastava, M Shokouhi… - arXiv preprint arXiv …, 2021 - arxiv.org
Global models are trained to be as generalizable as possible, with user invariance
considered desirable since the models are shared across multitudes of users. As such …
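
The snippet contrasts user-invariant global models with personalized ones; the trick the title points to is marking each input with a per-user identifier string so that a single shared model can condition on the user without per-user parameters. A minimal sketch, with the identifier format assumed:

```python
# Sketch: prepending an implicit user identifier to each input so one shared
# sentiment model can condition on the user. The identifier format is assumed.
import hashlib

def with_user_identifier(user_id: str, text: str) -> str:
    # A stable pseudo-random token per user (a stand-in for the paper's
    # random-string identifiers); no user-specific model weights are needed.
    token = hashlib.sha1(user_id.encode()).hexdigest()[:8]
    return f"user_{token} {text}"

print(with_user_identifier("alice@example.com", "This update is fantastic."))
```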

Improving task generalization via unified schema prompt

W Zhong, Y Gao, N Ding, Z Liu, M Zhou, J Wang, J Yin… - AI Open, 2023 - Elsevier
Task generalization has been a long-standing challenge in Natural Language Processing
(NLP). Recent research attempts to improve the task generalization ability of pre-trained …
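
One plausible reading of "unified schema prompt", sketched below with assumed field names, is that each task's prompt is assembled automatically from its schema (the names of its input fields), so unseen tasks with new schemas still map onto the same shared format.

```python
# Sketch: assembling a prompt from a task schema so heterogeneous tasks share
# one format. Field names and the joining syntax are illustrative assumptions.

def schema_prompt(task_name: str, schema: dict[str, str]) -> str:
    # Each schema key becomes a named slot; unseen tasks just bring new keys.
    body = " ".join(f"[{field}] {value}" for field, value in schema.items())
    return f"[TASK] {task_name} {body}"

print(schema_prompt("nli", {"premise": "A man is cooking.",
                            "hypothesis": "Someone prepares food."}))
print(schema_prompt("paraphrase", {"sentence1": "He left early.",
                                   "sentence2": "He departed ahead of time."}))
```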

Differential dataset cartography: Explainable artificial intelligence in comparative personalized sentiment analysis

J Kocoń, J Baran, K Kanclerz, M Kajstura… - International Conference …, 2023 - Springer
Data Maps is an interesting method of graphical representation of datasets, which allows
observing the model's behaviour for individual instances in the learning process (training …
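
Data Maps, which the snippet introduces, plot each training instance by its training dynamics; the two standard coordinates are confidence (the mean probability the model assigns to the gold label across epochs) and variability (the standard deviation of that probability). A minimal sketch of computing both:

```python
# Sketch: per-instance training dynamics behind a Data Map. gold_probs[i][e] is
# the probability the model assigned to instance i's gold label at epoch e.
import statistics

def data_map_coords(gold_probs: list[list[float]]) -> list[tuple[float, float]]:
    coords = []
    for probs in gold_probs:
        confidence = statistics.fmean(probs)     # mean gold-label probability
        variability = statistics.pstdev(probs)   # spread across epochs
        coords.append((confidence, variability))
    return coords

# Instance 0 is easy-to-learn (high, stable); instance 1 is ambiguous (swingy).
print(data_map_coords([[0.9, 0.95, 0.97], [0.2, 0.7, 0.4]]))
```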

Personalized LoRA for Human-Centered Text Understanding

Y Zhang, J Wang, LC Yu, D Xu, X Zhang - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered
text understanding (HCTU) is challenging since user tokens are million-level in most …
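
The snippet notes that adapting a PLM per user is hard when users number in the millions; LoRA-style adaptation sidesteps this by training only small low-rank factors per user while the large base weights stay frozen and shared. A sketch of the low-rank update itself (PyTorch, with the rank and scaling chosen arbitrarily):

```python
# Sketch: a LoRA-style linear layer where each user owns only the small
# low-rank factors A and B; the big frozen weight W is shared by everyone.
import torch
import torch.nn as nn

class UserLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # shared PLM weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scale * x A^T B^T  (only A and B are trained per user)
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = UserLoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```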

Large human language models: A need and the challenges

N Soni, HA Schwartz, J Sedoc… - arXiv preprint arXiv …, 2023 - arxiv.org
As research in human-centered NLP advances, there is a growing recognition of the
importance of incorporating human and social factors into NLP models. At the same time …

LiST: Lite prompted self-training makes parameter-efficient few-shot learners

Y Wang, S Mukherjee, X Liu, J Gao… - arXiv preprint arXiv …, 2021 - arxiv.org
We present LiST, short for Lite Prompted Self-Training, a new method for parameter-
efficient fine-tuning of large pre-trained language models (PLMs) for few-shot learning. LiST …
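
LiST combines prompting with self-training. The skeleton below sketches the generic self-training loop (a teacher pseudo-labels unlabeled data, then a student is fit on the confident labels); the confidence threshold, round count, and toy teacher/student are assumptions standing in for the paper's prompted models and lightweight adapter tuning.

```python
# Sketch: the generic self-training loop LiST builds on. Pseudo-label,
# filter by confidence, re-fit, and let the student become the new teacher.

def self_train(predict, fit, unlabeled, rounds=3, threshold=0.9):
    for _ in range(rounds):
        confident = []
        for text in unlabeled:
            label, confidence = predict(text)
            if confidence >= threshold:        # keep only confident pseudo-labels
                confident.append((text, label))
        predict = fit(confident)               # student becomes the new teacher
    return predict

# Toy stand-ins: a keyword "teacher" and a student that memorizes pseudo-labels.
def toy_teacher(text):
    sure = "good" in text or "bad" in text
    return ("pos" if "good" in text else "neg", 0.95 if sure else 0.5)

def toy_fit(pairs):
    memory = dict(pairs)
    return lambda t: (memory.get(t, "neg"), 1.0 if t in memory else 0.6)

model = self_train(toy_teacher, toy_fit, ["good film", "bad plot", "so-so"])
print(model("good film"))  # ('pos', 1.0)
```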

Learning User Embeddings from Human Gaze for Personalised Saliency Prediction

F Strohm, M Bâce, A Bulling - Proceedings of the ACM on Human …, 2024 - dl.acm.org
Reusable embeddings of user behaviour have shown significant performance
improvements for the personalised saliency prediction task. However, prior works require …
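
This entry concerns reusable user embeddings for personalized saliency prediction. A common architecture for this kind of conditioning, sketched here with every dimension assumed, looks up a learned per-user vector and fuses it with image features before decoding a saliency map.

```python
# Sketch: fusing a learned per-user embedding with image features for
# personalized saliency. Dimensions and fusion-by-concatenation are assumptions.
import torch
import torch.nn as nn

class PersonalizedSaliency(nn.Module):
    def __init__(self, num_users: int, user_dim: int = 32, feat_dim: int = 128):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, user_dim)  # one vector per user
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + user_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 28 * 28),  # flattened low-resolution saliency map
        )

    def forward(self, image_feats: torch.Tensor, user_ids: torch.Tensor):
        u = self.user_emb(user_ids)                      # (batch, user_dim)
        fused = torch.cat([image_feats, u], dim=-1)      # condition on the user
        return self.decoder(fused).view(-1, 1, 28, 28)

model = PersonalizedSaliency(num_users=100)
out = model(torch.randn(4, 128), torch.tensor([0, 3, 7, 42]))
print(out.shape)  # torch.Size([4, 1, 28, 28])
```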