Explainability for fair machine learning

T Begley, T Schwedes, C Frye, I Feige - arXiv preprint arXiv:2010.07389, 2020 - arxiv.org
As the decisions made or influenced by machine learning models increasingly impact our
lives, it is crucial to detect, understand, and mitigate unfairness. But even simply determining …

Privacy side channels in machine learning systems

E Debenedetti, G Severi, N Carlini… - arXiv preprint arXiv …, 2023 - arxiv.org
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum, when in reality, ML models are part of larger systems that include …

Augment then Smooth: Reconciling Differential Privacy with Certified Robustness

J Wu, AA Ghomi, D Glukhov, JC Cresswell… - arXiv preprint arXiv …, 2023 - arxiv.org
Machine learning models are susceptible to a variety of attacks that can erode trust in their
deployment. These threats include attacks against the privacy of training data and …

An empirical study on the intrinsic privacy of SGD

SL Hyland, S Tople - arXiv preprint arXiv:1912.02919, 2019 - arxiv.org
Introducing noise in the training of machine learning systems is a powerful way to protect
individual privacy via differential privacy guarantees, but comes at a cost to utility. This work …

How to DP-fy ML: A Practical Tutorial to Machine Learning with Differential Privacy

N Ponomareva, S Vassilvitskii, Z Xu… - Proceedings of the 29th …, 2023 - dl.acm.org
Machine Learning (ML) models are ubiquitous in real world applications and are a constant
focus of research. At the same time, the community has started to realize the importance of …

Differentially private optimization on large model at small cost

Z Bu, YX Wang, S Zha… - … Conference on Machine …, 2023 - proceedings.mlr.press
Differentially private (DP) optimization is the standard paradigm to learn large neural
networks that are accurate and privacy-preserving. The computational cost for DP deep …

Private knowledge transfer via model distillation with generative adversarial networks

D Gao, C Zhuo - arXiv preprint arXiv:2004.04631, 2020 - arxiv.org
The deployment of deep learning applications has to address the growing privacy concerns
when using private and sensitive data for training. A conventional deep learning model is …

Scalable privacy-preserving distributed learning

D Froelicher, JR Troncoso-Pastoriza, A Pyrgelis… - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper, we address the problem of privacy-preserving distributed learning and the
evaluation of machine-learning models by analyzing it in the widespread MapReduce …

On the impact of multi-dimensional local differential privacy on fairness

K Makhlouf, HH Arcolezi, S Zhioua, GB Brahim… - Data Mining and …, 2024 - Springer
Automated decision systems are increasingly used to make consequential decisions in
people's lives. Due to the sensitivity of the manipulated data and the resulting decisions …

Exploring the unfairness of DP-SGD across settings

F Noe, R Herskind, A Søgaard - arXiv preprint arXiv:2202.12058, 2022 - arxiv.org
End users and regulators require private and fair artificial intelligence models, but previous
work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the …