FR-Train: A mutual information-based approach to fair and robust training

Y Roh, K Lee, S Whang, C Suh - … Conference on Machine …, 2020 - proceedings.mlr.press
Trustworthy AI is a critical issue in machine learning where, in addition to training a model
that is accurate, one must consider both fair and robust training in the presence of data bias …

Sample selection for fair and robust training

Y Roh, K Lee, S Whang, C Suh - Advances in Neural …, 2021 - proceedings.neurips.cc
Fairness and robustness are critical elements of Trustworthy AI that need to be addressed
together. Fairness is about learning an unbiased model while robustness is about learning …

On adversarial bias and the robustness of fair machine learning

H Chang, TD Nguyen, SK Murakonda… - arXiv preprint arXiv …, 2020 - arxiv.org
Optimizing prediction accuracy can come at the expense of fairness. Towards minimizing
discrimination against a group, fair machine learning algorithms strive to equalize the …

Can we obtain fairness for free?

R Islam, S Pan, JR Foulds - Proceedings of the 2021 AAAI/ACM …, 2021 - dl.acm.org
There is growing awareness that AI and machine learning systems can in some cases learn
to behave in unfair and discriminatory ways with harmful consequences. However, despite …

Explainability for fair machine learning

T Begley, T Schwedes, C Frye, I Feige - arXiv preprint arXiv:2010.07389, 2020 - arxiv.org
As the decisions made or influenced by machine learning models increasingly impact our
lives, it is crucial to detect, understand, and mitigate unfairness. But even simply determining …

Fairness warnings and Fair-MAML: learning fairly with minimal data

D Slack, SA Friedler, E Givental - … of the 2020 Conference on Fairness …, 2020 - dl.acm.org
Motivated by concerns surrounding the fairness effects of sharing and transferring fair
machine learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The …

Improved adversarial learning for fair classification

LE Celis, V Keswani - arXiv preprint arXiv:1901.10443, 2019 - arxiv.org
Motivated by concerns that machine learning algorithms may introduce significant bias in
classification models, developing fair classifiers has become an important problem in …

Poisoning attacks on fair machine learning

MH Van, W Du, X Wu, A Lu - International Conference on Database …, 2022 - Springer
Both fair machine learning and adversarial learning have been extensively studied.
However, attacking fair machine learning models has received less attention. In this paper …

Fairness without demographics through adversarially reweighted learning

P Lahoti, A Beutel, J Chen, K Lee… - Advances in neural …, 2020 - proceedings.neurips.cc
Much of the previous machine learning (ML) fairness literature assumes that protected
features such as race and sex are present in the dataset, and relies upon them to mitigate …

SenSeI: Sensitive set invariance for enforcing individual fairness

M Yurochkin, Y Sun - arXiv preprint arXiv:2006.14168, 2020 - arxiv.org
In this paper, we cast fair machine learning as invariant machine learning. We first formulate
a version of individual fairness that enforces invariance on certain sensitive sets. We then …