Deep models under the GAN: information leakage from collaborative deep learning

B Hitaj, G Ateniese, F Perez-Cruz - … of the 2017 ACM SIGSAC conference …, 2017 - dl.acm.org
Deep Learning has recently become hugely popular in machine learning for its ability to
solve end-to-end learning systems, in which the features and the classifiers are learned …

Differentially private data generative models

Q Chen, C Xiang, M Xue, B Li, N Borisov… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep neural networks (DNNs) have recently been widely adopted in various applications,
and such success is largely due to a combination of algorithmic breakthroughs, computation …

Can we use split learning on 1D CNN models for privacy preserving training?

S Abuadbba, K Kim, M Kim, C Thapa… - Proceedings of the 15th …, 2020 - dl.acm.org
A new collaborative learning technique, called split learning, was recently introduced, aiming to protect
user data privacy without revealing raw input data to a server. It collaboratively runs a deep …

Membership inference attack against differentially private deep learning model

MA Rahman, T Rahman, R Laganière, N Mohammed… - Trans. Data Priv., 2018 - tdp.cat
The unprecedented success of deep learning is largely dependent on the availability of
massive amounts of training data. In many cases, these data are crowd-sourced and may …

DataLens: Scalable privacy preserving training via gradient compression and aggregation

B Wang, F Wu, Y Long, L Rimanic, C Zhang… - Proceedings of the 2021 …, 2021 - dl.acm.org
Recent success of deep neural networks (DNNs) hinges on the availability of large-scale
datasets; however, training on such datasets often poses privacy risks for sensitive training …

Differential privacy in deep learning: an overview

T Ha, TK Dang, TT Dang, TA Truong… - 2019 International …, 2019 - ieeexplore.ieee.org
Nowadays, deep learning has many applications in our daily life such as self-driving,
product recommendation, advertisements and healthcare. In the training phase, deep …

Model inversion attacks against collaborative inference

Z He, T Zhang, RB Lee - Proceedings of the 35th Annual Computer …, 2019 - dl.acm.org
The prevalence of deep learning has drawn attention to the privacy protection of sensitive
data. Various privacy threats have been presented, where an adversary can steal model …

Privacy-preserving deep learning

R Shokri, V Shmatikov - Proceedings of the 22nd ACM SIGSAC …, 2015 - dl.acm.org
Deep learning based on artificial neural networks is a very popular approach to modeling,
classifying, and recognizing complex data such as images, speech, and text. The …

GANobfuscator: Mitigating information leakage under GAN via differential privacy

C Xu, J Ren, D Zhang, Y Zhang, Z Qin… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
By learning generative models of semantic-rich data distributions from samples, generative
adversarial network (GAN) has recently attracted intensive research interests due to its …

Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness

NH Phan, M Vu, Y Liu, R Jin, D Dou, X Wu… - arXiv preprint arXiv …, 2019 - arxiv.org
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve
differential privacy in deep neural networks, with provable robustness against adversarial …