The bias amplification paradox in text-to-image generation

P Seshadri, S Singh, Y Elazar - arXiv preprint arXiv:2308.00755, 2023 - arxiv.org
Bias amplification is a phenomenon in which models increase imbalances present in the
training data. In this paper, we study bias amplification in the text-to-image domain using …

Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather

B Pathiraja, C Liu, R Senanayake - arXiv preprint arXiv:2406.00219, 2024 - arxiv.org
The deployment of autonomous vehicles (AVs) is rapidly expanding to numerous cities. At
the heart of AVs, the object detection module assumes a paramount role, directly influencing …

Leveraging CLIP for Inferring Sensitive Information and Improving Model Fairness

M Zhang, R Chunara - arXiv preprint arXiv:2403.10624, 2024 - arxiv.org
Performance disparities across sub-populations are known to exist in deep learning-based
vision recognition models, but previous work has largely addressed such fairness concerns …

Robust Machine Learning: Detection, Evaluation and Adaptation Under Distribution Shift

S Garg - 2024 - kilthub.cmu.edu
Deep learning, despite its broad applicability, grapples with robustness challenges in
real-world applications, especially when training and test distributions differ. Reasons for the …

Prompting for Robustness: Extracting Robust Classifiers from Foundation Models

A Setlur, S Garg, V Smith, S Levine - ICLR 2024 Workshop on Reliable and … - openreview.net
Machine learning models can fail when trained on distributions with hidden confounders
(spuriously correlated with the label) and tested on distributions where such correlations are …