Infocensor: an information-theoretic framework against sensitive attribute inference and demographic disparity

T Zheng, B Li - Proceedings of the 2022 ACM on Asia Conference on …, 2022 - dl.acm.org
Deep learning sits at the forefront of many ongoing advances in a variety of learning tasks.
Despite its supremacy in accuracy under benign environments, deep learning suffers from …

Quantifying and mitigating privacy risks of contrastive learning

X He, Y Zhang - Proceedings of the 2021 ACM SIGSAC Conference on …, 2021 - dl.acm.org
Data has been the key factor driving the development of machine learning (ML) over the past
decade. However, high-quality data, in particular labeled data, is often hard and expensive …

Membership inference attacks against adversarially robust deep learning models

L Song, R Shokri, P Mittal - 2019 IEEE Security and Privacy …, 2019 - ieeexplore.ieee.org
In recent years, the research community has increasingly focused on understanding the
security and privacy challenges posed by deep learning models. However, the security …

Interpretable complex-valued neural networks for privacy protection

L Xiang, H Ma, H Zhang, Y Zhang, J Ren… - arXiv preprint arXiv …, 2019 - arxiv.org
Previous studies have found that an adversary can often infer unintended input
information from intermediate-layer features. We study the possibility of preventing such …

Interpreting disparate privacy-utility tradeoff in adversarial learning via attribute correlation

L Zhang, Y Chen, A Li, B Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial learning is commonly used to extract latent data representations that are
expressive for predicting the target attribute but indistinguishable with respect to the privacy attribute …

Leveraging algorithmic fairness to mitigate blackbox attribute inference attacks

J Aalmoes, V Duddu, A Boutet - arXiv preprint arXiv:2211.10209, 2022 - arxiv.org
Machine learning (ML) models have been deployed for high-stakes applications, e.g.,
healthcare and criminal justice. Prior work has shown that ML models are vulnerable to …

Gradient masking and the underestimated robustness threats of differential privacy in deep learning

F Boenisch, P Sperl, K Böttinger - arXiv preprint arXiv:2105.07985, 2021 - arxiv.org
An important problem in deep learning is the privacy and security of neural networks (NNs).
Both aspects have long been considered separately. To date, it is still poorly understood …

Certified robustness to adversarial examples with differential privacy

M Lecuyer, V Atlidakis, R Geambasu… - … IEEE symposium on …, 2019 - ieeexplore.ieee.org
Adversarial examples that fool machine learning models, particularly deep neural networks,
have been a topic of intense research interest, with attacks and defenses being developed …

Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness

NH Phan, M Vu, Y Liu, R Jin, D Dou, X Wu… - arXiv preprint arXiv …, 2019 - arxiv.org
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve
differential privacy in deep neural networks, with provable robustness against adversarial …

Alrs: An adversarial noise based privacy-preserving data sharing mechanism

J Chen, R Deng, H Chen, N Ruan, Y Liu, C Liu… - Information Security and …, 2021 - Springer
Deep learning is data-hungry, and its performance generally depends heavily on the amount
of training data. Multiple parties can obtain better models by sharing their data and training …