Fairness via Adversarial Attribute Neighbourhood Robust Learning

Q Qi, S Ardeshir, Y Xu, T Yang - arXiv preprint arXiv:2210.06630, 2022 - arxiv.org
Improving fairness between privileged and less-privileged sensitive attribute groups
(e.g., {race, gender}) has attracted considerable attention. To ensure the model performs uniformly …

Attrleaks on the edge: Exploiting information leakage from privacy-preserving co-inference

Z Wang, K Liu, J Hu, J Ren, H Guo… - Chinese Journal of …, 2023 - ieeexplore.ieee.org
Collaborative inference (co-inference) accelerates deep neural network inference via
extracting representations at the device and making predictions at the edge server, which …

RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense

Y Cui, SIA Meerza, Z Li, L Liu, J Zhang… - Proceedings of the 2023 …, 2023 - dl.acm.org
Federated learning (FL) provides a variety of privacy advantages by allowing clients to
collaboratively train a model without sharing their private data. However, recent studies have …

Differentially private optimizers can learn adversarially robust models

Z Bu, Y Zhang - Transactions on Machine Learning Research, 2023 - openreview.net
Machine learning models have shone in a variety of domains and attracted increasing
attention from both the security and the privacy communities. One important yet worrying …

Use the spear as a shield: An adversarial example based privacy-preserving technique against membership inference attacks

M Xue, C Yuan, C He, Y Wu, Z Wu… - … on Emerging Topics …, 2022 - ieeexplore.ieee.org
Recent research demonstrates that deep learning models are vulnerable to membership
inference attacks. A few defenses have been proposed, but they suffer from compromising the …

Privacy in deep learning: A survey

F Mireshghallah, M Taram, P Vepakomma… - arXiv preprint arXiv …, 2020 - arxiv.org
The ever-growing advances in deep learning across many areas, including vision,
recommendation systems, natural language processing, etc., have led to the adoption of …

PAR-GAN: improving the generalization of generative adversarial networks against membership inference attacks

J Chen, WH Wang, H Gao, X Shi - Proceedings of the 27th ACM SIGKDD …, 2021 - dl.acm.org
Recent works have shown that Generative Adversarial Networks (GANs) may generalize
poorly and thus are vulnerable to privacy attacks. In this paper, we seek to improve the …

Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks

J Li, M Khayatkhoei, J Zhu, H Xie, ME Hussein… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring a neural network is not relying on protected attributes (e.g., race, sex, age) for
predictions is crucial in advancing fair and trustworthy AI. While several promising methods …

A framework for understanding model extraction attack and defense

X Xian, M Hong, J Ding - arXiv preprint arXiv:2206.11480, 2022 - arxiv.org
The privacy of machine learning models has become a significant concern in many
emerging Machine-Learning-as-a-Service applications, where prediction services based on …

From gradient leakage to adversarial attacks in federated learning

JQ Lim, CS Chan - 2021 IEEE International Conference on …, 2021 - ieeexplore.ieee.org
Deep neural networks (DNNs) are widely used in real-life applications despite limited
understanding of this technology and its challenges. Data privacy is one of the bottlenecks …