Measuring social biases in grounded vision and language embeddings

C Ross, B Katz, A Barbu - arXiv preprint arXiv:2002.08911, 2020 - arxiv.org
We generalize the notion of social biases from language embeddings to grounded vision
and language embeddings. Biases are present in grounded embeddings, and indeed seem …
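
Bias tests of this kind typically follow the WEAT recipe of Caliskan et al.: compare how strongly two sets of target concepts associate with two sets of attribute words in embedding space. The sketch below assumes pre-computed, unit-length numpy vectors and shows the standard effect-size computation; it illustrates the test family, not the paper's exact grounded protocol.

```python
# WEAT-style association test on pre-computed embedding vectors.
# Illustrative sketch only; a grounded variant would draw these vectors
# from an image-and-text encoder rather than a static lookup table.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity to attribute set A minus mean similarity to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Cohen's-d-style effect size over target sets X, Y and attribute sets A, B
    (each a list of embedding vectors). Values near 0 indicate little bias."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```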

Survey of social bias in vision-language models

N Lee, Y Bang, H Lovenia, S Cahyawijaya… - arXiv preprint arXiv …, 2023 - arxiv.org
In recent years, the rapid advancement of machine learning (ML) models, particularly
transformer-based pre-trained models, has revolutionized Natural Language Processing …

Debiasing vision-language models via biased prompts

CY Chuang, V Jampani, Y Li, A Torralba… - arXiv preprint arXiv …, 2023 - arxiv.org
Machine learning models have been shown to inherit biases from their training datasets.
This can be particularly problematic for vision-language foundation models trained on …
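
The core mechanism here is to estimate a bias subspace from embeddings of deliberately biased prompt pairs and project it out of query embeddings. A minimal sketch follows, assuming `embed` is a text encoder (e.g., CLIP's) that returns unit-norm vectors; the paper's calibrated projection differs in detail.

```python
# Sketch of prompt-based projection debiasing. `embed` is an assumed
# callable mapping a list of strings to an (n, d) array of unit-norm
# embeddings; simplified variant, not the paper's exact calibrated projection.
import numpy as np

def bias_projection(embed, prompt_pairs, k=1):
    """Estimate a rank-k bias subspace from paired prompts such as
    ("a photo of a man", "a photo of a woman") and return the matrix
    that projects onto its orthogonal complement."""
    diffs = np.stack([embed([p])[0] - embed([q])[0] for p, q in prompt_pairs])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    V = vt[:k]                          # top-k bias directions, shape (k, d)
    return np.eye(V.shape[1]) - V.T @ V

def debias(P, z):
    """Remove the estimated bias subspace from one embedding and renormalize."""
    z = P @ z
    return z / np.linalg.norm(z)
```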

Worst of both worlds: Biases compound in pre-trained vision-and-language models

T Srinivasan, Y Bisk - arXiv preprint arXiv:2104.08666, 2021 - arxiv.org
Numerous works have analyzed biases in vision and pre-trained language models
individually; however, less attention has been paid to how these biases interact in …

A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning

H Berg, SM Hall, Y Bhalgat, W Yang, HR Kirk… - arXiv preprint arXiv …, 2022 - arxiv.org
Vision-language models can encode societal biases and stereotypes, but there are
challenges to measuring and mitigating these multimodal harms due to lacking …
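
A common way to set up such adversarial debiasing is a gradient-reversal adversary that tries to recover the protected attribute from the representation. The PyTorch sketch below shows that general pattern; the paper itself learns debiasing prompt tokens for a frozen vision-language model, so treat this as the recipe rather than their exact architecture.

```python
# Gradient-reversal adversary: the adversary minimizes attribute-prediction
# loss while the reversed gradient pushes the upstream encoder to strip
# attribute information from its embeddings. Illustrative sketch only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AttributeAdversary(nn.Module):
    """Predicts a protected attribute (n_groups classes) from an embedding."""
    def __init__(self, dim, n_groups, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, n_groups))

    def forward(self, z):
        return self.head(GradReverse.apply(z, self.lam))

# Training step (schematic): loss = task_loss + ce(adversary(z), attribute);
# one optimizer step trains the adversary while the reversed gradient removes
# attribute signal from the encoder that produced z.
```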

Wordbias: An interactive visual tool for discovering intersectional biases encoded in word embeddings

B Ghai, MN Hoque, K Mueller - Extended Abstracts of the 2021 CHI …, 2021 - dl.acm.org
Intersectional bias is a bias caused by an overlap of multiple social factors like gender,
sexuality, race, disability, religion, etc. A recent study has shown that word embedding …

Bias in word embeddings

O Papakyriakopoulos, S Hegelich… - Proceedings of the …, 2020 - dl.acm.org
Word embeddings are a widely used set of natural language processing techniques that
map words to vectors of real numbers. These vectors are used to improve the quality of …

Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases

W Guo, A Caliskan - Proceedings of the 2021 AAAI/ACM Conference on …, 2021 - dl.acm.org
With the starting point that implicit human biases are reflected in the statistical regularities of
language, it is possible to measure biases in English static word embeddings. State-of-the …
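
The "distribution of biases" framing can be read as: embed each word in many sampled contexts with a contextual encoder and compute one WEAT-style effect size per random draw of contexts. A rough sketch, assuming a hypothetical helper `embed_in_context(word, sentence)` (e.g., pooled transformer states for the word); not the paper's exact procedure.

```python
# Distribution-of-effect-sizes sketch for contextualized embeddings.
# `embed_in_context(word, sentence)` is an assumed helper returning one
# vector per (word, context) pair; `contexts[word]` is a list of sentences
# containing that word. Simplified reading of the approach.
import random
import numpy as np

def effect_size(X, Y, A, B):
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    s = lambda w: np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sx, sy = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

def bias_distribution(embed_in_context, contexts, X, Y, A, B, n_samples=1000):
    """Resample one context per word each round; the spread of the resulting
    effect sizes shows how the measured bias varies across contexts."""
    pick = lambda words: [embed_in_context(w, random.choice(contexts[w])) for w in words]
    return np.array([effect_size(pick(X), pick(Y), pick(A), pick(B))
                     for _ in range(n_samples)])
```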

Vico: Word embeddings from visual co-occurrences

T Gupta, A Schwing, D Hoiem - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
We propose to learn word embeddings from visual co-occurrences. Two words co-occur
visually if both words apply to the same image or image region. Specifically, we extract four …
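
The co-occurrence statistic itself is straightforward to compute from annotated images: two words co-occur when both label the same image or region. Below is a minimal sketch that builds a single PMI matrix from image-level label sets; the paper distinguishes several co-occurrence types (object-attribute, attribute-attribute, and so on), which this collapses for illustration.

```python
# Count visual co-occurrences from image-level label sets and turn the
# counts into a positive PMI matrix, the kind of statistic such embeddings
# can be learned from. Illustrative sketch only.
from collections import Counter
from itertools import combinations
import numpy as np

def visual_pmi(image_labels):
    """image_labels: iterable of sets of words that apply to one image
    (or region). Returns (vocab, PMI matrix)."""
    pair_counts, word_counts, n_images = Counter(), Counter(), 0
    for labels in image_labels:
        n_images += 1
        word_counts.update(labels)
        pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))
    vocab = sorted(word_counts)
    idx = {w: i for i, w in enumerate(vocab)}
    pmi = np.zeros((len(vocab), len(vocab)))
    for pair, c in pair_counts.items():
        w1, w2 = tuple(pair)
        val = np.log(c * n_images / (word_counts[w1] * word_counts[w2]))
        pmi[idx[w1], idx[w2]] = pmi[idx[w2], idx[w1]] = max(val, 0.0)
    return vocab, pmi
```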

OSCaR: Orthogonal subspace correction and rectification of biases in word embeddings

S Dev, T Li, JM Phillips, V Srikumar - arXiv preprint arXiv:2007.00049, 2020 - arxiv.org
Language representations are known to carry stereotypical biases and, as a result, lead to
biased predictions in downstream tasks. While existing methods are effective at mitigating …
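
The rectification idea can be pictured as a rotation rather than a deletion: identify two concept directions (say, gender and occupation) and rotate embeddings within their span so the second direction becomes orthogonal to the first, instead of zeroing it out. The simplified sketch below applies one uniform in-plane rotation; the paper grades the rotation so vectors aligned with the first direction barely move.

```python
# Simplified subspace-rectification sketch: rotate the in-plane component
# of every embedding so the second concept direction becomes orthogonal to
# the first. Uniform rotation for illustration only; the actual method uses
# a graded rotation.
import numpy as np

def rectify(X, g, o):
    """X: (n, d) embeddings; g, o: unit-length concept direction vectors."""
    o_perp = o - (o @ g) * g
    o_perp /= np.linalg.norm(o_perp)          # component of o orthogonal to g
    basis = np.stack([g, o_perp])             # (2, d) orthonormal basis of span(g, o)
    theta = np.pi / 2 - np.arccos(np.clip(o @ g, -1.0, 1.0))  # carries o onto o_perp
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])           # rotation within the plane
    coords = X @ basis.T                      # in-plane coordinates of each row
    return X - coords @ basis + (coords @ R.T) @ basis
```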