Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness …
K Chaudhuri, D Hsu - … of the 24th Annual Conference on …, 2011 - proceedings.mlr.press
This work studies the problem of privacy-preserving classification, namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In …
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We …
A critical concern in data-driven decision making is to build models whose outcomes do not discriminate against some demographic groups, including gender, ethnicity, or age. To …
The traditional notion of generalization, i.e., learning a hypothesis whose empirical error is close to its true error, is surprisingly brittle. As has recently been noted [Dwork et al …
LE Celis, A Mehrotra, N Vishnoi - Advances in Neural …, 2021 - proceedings.neurips.cc
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily …
Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in …
X Gu, Z Tianqing, J Li, T Zhang, W Ren, KKR Choo - Computers & Security, 2022 - Elsevier
As applications of machine learning become increasingly widespread, the need to ensure model accuracy and fairness while protecting the privacy of user data becomes more …
N Patel, R Shokri, Y Zick - Proceedings of the 2022 ACM Conference on …, 2022 - dl.acm.org
Using machine learning models in critical decision-making processes has given rise to a call for algorithmic transparency. Model explanations, however, might leak information about the …