Dynamic learning systems subject to selective labeling exhibit censoring, i.e., persistent negative predictions assigned to one or more subgroups of points. In applications like …
Neglecting the effect that decisions have on individuals (and thus, on the underlying data distribution) when designing algorithmic decision-making policies may increase inequalities …
A Majumdar, I Valera - The 2024 ACM Conference on Fairness …, 2024 - dl.acm.org
Algorithms are increasingly used to automate large-scale decision-making processes, e.g., online platforms that make instant decisions in lending, hiring, and education. When such …
As automated decision making and decision assistance systems become common in everyday life, research on the prevention or mitigation of potential harms that arise from …
T Leemann, M Pawelczyk, CT Eberle… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
We examine machine learning models in a setup where individuals have the choice to share optional personal information with a decision-making system, as seen in modern insurance …
In many predictive contexts (e.g., credit lending), true outcomes are only observed for samples that were positively classified in the past. These past observations, in turn, form …
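The selective-labels feedback loop this snippet describes can be sketched with a toy simulation (a minimal sketch; the scoring model, threshold, and outcome distribution below are illustrative assumptions, not taken from the cited work):

```python
import random

random.seed(0)

def true_outcome(score):
    # Hypothetical ground truth: higher-scoring applicants repay more often.
    return random.random() < score

# Simple threshold policy: approve an applicant if score >= 0.5.
threshold = 0.5
observed = []  # labeled training data accumulates ONLY for approved applicants

for _ in range(1000):
    score = random.random()       # applicant's latent quality
    if score >= threshold:
        # The outcome (repaid or not) is observed only after approval.
        observed.append((score, true_outcome(score)))
    # Rejected applicants contribute no label at all: selective labeling.

# The observed data covers only scores >= 0.5, so a model retrained on it
# never sees the rejected region and cannot correct an overly harsh policy.
min_observed_score = min(s for s, _ in observed)
print(min_observed_score >= threshold)
```

Retraining on `observed` reproduces the original censoring, which is the feedback loop the snippet refers to.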
Missing values in real-world data pose a significant and unique challenge to algorithmic fairness. Different demographic groups may be unequally affected by missing data, and …
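The unequal impact of missingness across groups can be made concrete with a small synthetic example (all group names and numbers here are assumptions for illustration only):

```python
# Toy records: missingness is concentrated in group B, so common fixes
# such as dropping incomplete rows affect the two groups unequally.
records = [
    {"group": "A", "income": 52_000},
    {"group": "A", "income": 48_000},
    {"group": "A", "income": None},    # 1 of 3 missing for group A
    {"group": "B", "income": None},
    {"group": "B", "income": None},
    {"group": "B", "income": 61_000},  # 2 of 3 missing for group B
]

def missing_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["income"] is None for r in rows) / len(rows)

print(round(missing_rate("A"), 2), round(missing_rate("B"), 2))
# Dropping incomplete rows here discards twice as large a share of
# group B, skewing any downstream model toward group A.
```

This is the mechanism the snippet points to: the missingness pattern itself, before any model is trained, already carries a demographic skew.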
Fairness in software systems aims to provide algorithms that operate in a nondiscriminatory manner, with respect to protected attributes such as gender, race, or age. Ensuring fairness …
In today's world, where humans heavily rely on intelligent systems for everyday decisions, data and algorithmic biases have become a critical concern. From trivial cases like TV show …