A survey on gender bias in natural language processing

K Stańczak, I Augenstein - arXiv preprint arXiv:2112.14168, 2021 - arxiv.org
Language can be used as a means of reproducing and enforcing harmful stereotypes and
biases, and has been analysed as such in numerous research works. In this paper, we present a …

Large language models as superpositions of cultural perspectives

G Kovač, M Sawayama, R Portelas, C Colas… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) are often misleadingly recognized as having a personality
or a set of values. We argue that an LLM can be seen as a superposition of perspectives …

Quantifying gender biases towards politicians on Reddit

S Marjanovic, K Stańczak, I Augenstein - PLOS ONE, 2022 - journals.plos.org
Despite attempts to increase gender parity in politics, global efforts have struggled to ensure
equal female representation. This is likely tied to implicit gender biases against women in …

CREHate: Cross-cultural re-annotation of English hate speech dataset

N Lee, C Jung, J Myung, J Jin, J Kim, A Oh - arXiv preprint arXiv …, 2023 - arxiv.org
English datasets predominantly reflect the perspectives of certain nationalities, which can
lead to cultural biases in models and datasets. This is particularly problematic in tasks …

Revealing fine-grained values and opinions in large language models

D Wright, A Arora, N Borenstein, S Yadav… - arXiv preprint arXiv …, 2024 - arxiv.org
Uncovering latent values and opinions embedded in large language models (LLMs) can
help identify biases and mitigate potential harm. Recently, this has been approached by …

Measuring gender bias in West Slavic language models

S Martinková, K Stańczak, I Augenstein - arXiv preprint arXiv:2304.05783, 2023 - arxiv.org
Pre-trained language models have been known to perpetuate biases from the underlying
datasets to downstream tasks. However, these findings are predominantly based on …

Towards an enhanced understanding of bias in pre-trained neural language models: A survey with special emphasis on affective bias

K Anoop, MP Gangan, P Deepak, VL Lajish - Responsible Data Science …, 2022 - Springer
The remarkable progress in Natural Language Processing (NLP) brought about by deep
learning, particularly with the recent advent of large pre-trained neural language models, is …

Political-LLM: Large language models in political science

L Li, J Li, C Chen, F Gui, H Yang, C Yu, Z Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, large language models (LLMs) have been widely adopted in political
science tasks such as election prediction, sentiment analysis, policy impact assessment, and …

Evaluating cultural adaptability of a large language model via simulation of synthetic personas

L Kwok, M Bravansky, LD Griffin - arXiv preprint arXiv:2408.06929, 2024 - arxiv.org
The success of Large Language Models (LLMs) in multicultural environments hinges on
their ability to understand users' diverse cultural backgrounds. We measure this capability by …

Social bias probing: Fairness benchmarking for language models

MM Manerba, K Stańczak, R Guidotti… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models have been shown to encode a variety of social biases, which
carries the risk of downstream harms. While the impact of these biases has been …