It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA) …
Large LMs such as GPT-3 are powerful, but can make mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret "What word is similar to good?" to …
H Devinney, J Björklund, H Björklund - … of the 2022 ACM conference on …, 2022 - dl.acm.org
The rise of concern around Natural Language Processing (NLP) technologies containing and perpetuating social biases has led to a rich and rapidly growing area of research …
Y Tal, I Magar, R Schwartz - arXiv preprint arXiv:2206.09860, 2022 - arxiv.org
The size of pretrained models is increasing, and so is their performance on a variety of NLP tasks. However, as their memorization capacity grows, they might pick up more social …
P Anantaprayoon, M Kaneko, N Okazaki - arXiv preprint arXiv:2309.09697, 2023 - arxiv.org
Discriminatory social biases, including gender biases, have been found in Pre-trained Language Models (PLMs). In Natural Language Inference (NLI), recent bias evaluation …
Common studies of gender bias in NLP focus either on extrinsic bias measured by model performance on a downstream task or on intrinsic bias found in models' internal …
Large language models trained on a mixture of NLP tasks that are converted into a text-to-text format using prompts can generalize to novel forms of language and handle novel …
Much research has sought to evaluate the degree to which large language models reflect social biases. We complement such work with an approach to elucidating the connections …
Discrete prompts have been used for fine-tuning Pre-trained Language Models for diverse NLP tasks. In particular, automatic methods that generate discrete prompts from a small set …