Fairness in machine learning: A survey

S Caton, C Haas - ACM Computing Surveys, 2024 - dl.acm.org
When Machine Learning technologies are used in contexts that affect citizens, companies as
well as researchers need to be confident that there will not be any unexpected social …

Survey on sociodemographic bias in natural language processing

V Gupta, PN Venkit, S Wilson… - arXiv preprint arXiv …, 2023 - researchgate.net
Deep neural networks often learn unintended bias during training, which might have harmful
effects when deployed in real-world settings. This work surveys 214 papers related to …

Unmasking nationality bias: A study of human perception of nationalities in AI-generated articles

P Narayanan Venkit, S Gautam… - Proceedings of the …, 2023 - dl.acm.org
We investigate the potential for nationality biases in natural language processing (NLP)
models using human evaluation methods. Biased NLP models can perpetuate stereotypes …

From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML

S Rismani, R Shelby, A Smart, E Jatho, J Kroll… - Proceedings of the …, 2023 - dl.acm.org
Inappropriate design and deployment of machine learning (ML) systems lead to negative
downstream social and ethical impacts, described here as social and ethical risks, for users …

Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development

S Rismani, R Shelby, A Smart, R Delos Santos… - Proceedings of the …, 2023 - dl.acm.org
Identifying potential social and ethical risks in emerging machine learning (ML) models and
their applications remains challenging. In this work, we applied two well-established safety …

FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication

E Slyman, S Lee, S Cohen… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Recent dataset deduplication techniques have demonstrated that content-aware dataset
pruning can dramatically reduce the cost of training Vision-Language Pretrained (VLP) …

Leveraging ontologies to document bias in data

M Russo, ME Vidal - arXiv preprint arXiv:2407.00509, 2024 - arxiv.org
Machine Learning (ML) systems are capable of reproducing and often amplifying undesired
biases. This underscores the importance of operating under practices that enable the …

From principles to practice: An accountability metrics catalogue for managing AI risks

B Xia, Q Lu, L Zhu, SU Lee, Y Liu, Z Xing - arXiv preprint arXiv:2311.13158, 2023 - arxiv.org
Artificial Intelligence (AI), particularly through the advent of large-scale generative AI (GenAI)
models such as Large Language Models (LLMs), has become a transformative element in …

To which reference class do you belong? Measuring racial fairness of reference classes with normative modeling

S Rutherford, T Wolfers, C Fraza, NG Harnett… - arXiv preprint arXiv …, 2024 - arxiv.org
Reference classes in healthcare establish healthy norms, such as pediatric growth charts of
height and weight, and are used to chart deviations from these norms which represent …

A Multivocal Literature Review on Privacy and Fairness in Federated Learning

B Balbierer, L Heinlein, D Zipperling, N Kühl - arXiv preprint arXiv …, 2024 - arxiv.org
Federated Learning presents a way to revolutionize AI applications by eliminating the
necessity for data sharing. Yet, research has shown that information can still be extracted …