Exploiting fairness to enhance sensitive attributes reconstruction

J Ferry, U Aïvodji, S Gambs… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
In recent years, a growing body of work has emerged on how to learn machine learning
models under fairness constraints, often expressed with respect to some sensitive attributes …

Learning fair representations through uniformly distributed sensitive attributes

PJ Kenfack, AR Rivera, AM Khan… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
Machine Learning (ML) models trained on biased data can reproduce and even amplify
these biases. Since such models are deployed to make decisions that can affect people's …

Achieving Fairness through Separability: A Unified Framework for Fair Representation Learning

T Jang, H Gao, P Shi, X Wang - International Conference on …, 2024 - proceedings.mlr.press
Fairness is a growing concern in machine learning as state-of-the-art models may amplify
social prejudice by making biased predictions against specific demographics such as race …

Learning fair models without sensitive attributes: A generative approach

H Zhu, E Dai, H Liu, S Wang - Neurocomputing, 2023 - Elsevier
Most existing fair classifiers rely on sensitive attributes to achieve fairness. However, for
many scenarios, we cannot obtain sensitive attributes due to privacy and legal issues. The …

Towards fair classifiers without sensitive attributes: Exploring biases in related features

T Zhao, E Dai, K Shu, S Wang - … Conference on Web Search and Data …, 2022 - dl.acm.org
Despite the rapid development and great success of machine learning models, extensive
studies have exposed their disadvantage of inheriting latent discrimination and societal bias …

Estimating and implementing conventional fairness metrics with probabilistic protected features

H Elzayn, E Black, P Vossler, N Jo… - … IEEE Conference on …, 2024 - ieeexplore.ieee.org
The vast majority of techniques to train fair models require access to the protected attribute
(e.g., race, gender), either at train time or in production. However, in many practically …

MMD-B-Fair: Learning fair representations with statistical testing

N Deka, DJ Sutherland - International Conference on …, 2023 - proceedings.mlr.press
We introduce a method, MMD-B-Fair, to learn fair representations of data via kernel two-
sample testing. We find neural features of our data where a maximum mean discrepancy …
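A minimal sketch of the two-sample statistic behind this approach, assuming an RBF kernel and a simple biased estimator of squared MMD between the representations of two sensitive groups; MMD-B-Fair itself works with a block test and optimizes test power, so the function names, bandwidth, and penalty use below are illustrative assumptions rather than the paper's exact objective.

```python
# Hedged sketch: RBF-kernel MMD^2 between representations of two sensitive groups.
# This is only the raw two-sample statistic, not MMD-B-Fair's block-test-power objective.
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise squared Euclidean distances, then Gaussian kernel values.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(z_a, z_b, bandwidth=1.0):
    # Biased V-statistic estimate of squared maximum mean discrepancy.
    k_aa = rbf_kernel(z_a, z_a, bandwidth).mean()
    k_bb = rbf_kernel(z_b, z_b, bandwidth).mean()
    k_ab = rbf_kernel(z_a, z_b, bandwidth).mean()
    return k_aa + k_bb - 2 * k_ab

# Toy usage: encoded features of two demographic groups (hypothetical encoder output).
z_group_a = torch.randn(128, 16)
z_group_b = torch.randn(256, 16)
penalty = mmd2(z_group_a, z_group_b)  # driving this toward 0 makes the groups hard to distinguish
```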

Fair classification via domain adaptation: A dual adversarial learning approach

Y Liang, C Chen, T Tian, K Shu - Frontiers in Big Data, 2023 - frontiersin.org
Modern machine learning (ML) models are becoming increasingly popular and are widely
used in decision-making systems. However, studies have shown critical issues of ML …

Unified fairness from data to learning algorithm

Y Zhang, L Luo, H Huang - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
In classification problems, individual fairness prevents discrimination against individuals
based on protected attributes. Fairness-aware methods usually consist of two stages, first …

FairNN - Conjoint Learning of Fair Representations for Fair Decisions

T Hu, V Iosifidis, W Liao, H Zhang, MY Yang… - Discovery Science: 23rd …, 2020 - Springer
In this paper, we propose FairNN a neural network that performs joint feature representation
and classification for fairness-aware learning. Our approach optimizes a multi-objective loss …
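A minimal sketch of what such a joint, multi-objective loss can look like, assuming an autoencoder-style reconstruction term, a classification term, and a demographic-parity gap as the fairness term; the architecture, weights, and regularizer below are illustrative assumptions, not FairNN's exact formulation.

```python
# Hedged sketch of a joint loss for fair representation learning + classification.
# All names, layer sizes, and the parity regularizer are illustrative assumptions.
import torch
import torch.nn as nn

class JointFairModel(nn.Module):
    def __init__(self, d_in, d_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))
        self.classifier = nn.Linear(d_latent, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z).squeeze(-1)

def joint_loss(model, x, y, s, w_rec=1.0, w_clf=1.0, w_fair=1.0):
    x_hat, logits = model(x)
    rec = nn.functional.mse_loss(x_hat, x)                            # representation quality
    clf = nn.functional.binary_cross_entropy_with_logits(logits, y)   # prediction accuracy
    p = torch.sigmoid(logits)
    fair = (p[s == 1].mean() - p[s == 0].mean()).abs()                # demographic-parity gap
    return w_rec * rec + w_clf * clf + w_fair * fair

# Toy usage (s is a binary sensitive attribute; assumes both groups appear in the batch).
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
s = torch.randint(0, 2, (64,))
model = JointFairModel(d_in=10)
loss = joint_loss(model, x, y, s)
```

The weights w_rec, w_clf, and w_fair expose the usual accuracy-fairness trade-off: raising w_fair shrinks the group gap at some cost to reconstruction and classification quality.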