Output regeneration defense against membership inference attacks for protecting data privacy

Y Ding, P Huang, H Liang, F Yuan… - International Journal of …, 2023 - emerald.com
Purpose Recently, deep learning (DL) has been widely applied in various aspects of human
endeavors. However, studies have shown that DL models may also be a primary cause of …

Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks

S Ben Hamida, H Mrabet, F Chaieb, A Jemai - Multimedia Tools and …, 2024 - Springer
Abstract Machine learning (ML) has revolutionized various industries, but concerns about
privacy and security have emerged as significant challenges. Membership inference attacks …

Defending against membership inference attacks with high utility by GAN

L Hu, J Li, G Lin, S Peng, Z Zhang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The success of machine learning (ML) depends on the availability of large-scale datasets.
However, recent studies have shown that models trained on such datasets are vulnerable to …

Defending against Membership Inference Attack by Shielding Membership Signals

Y Miao, Y Yu, X Li, Y Guo, X Liu… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Membership Inference Attack (MIA) is a key measure for evaluating privacy leakage in Machine
Learning (ML) models, aiming to distinguish private members from non-members by training …

On the privacy effect of data enhancement via the lens of memorization

X Li, Q Li, Z Hu, X Hu - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
Machine learning poses severe privacy concerns as it has been shown that the learned
models can reveal sensitive information about their training data. Many works have …

When does data augmentation help with membership inference attacks?

Y Kaya, T Dumitras - International conference on machine …, 2021 - proceedings.mlr.press
Deep learning models often raise privacy concerns as they leak information about their
training data. This leakage enables membership inference attacks (MIA) that can identify …

Membership reconstruction attack in deep neural networks

Y Long, Z Ying, H Yan, R Fang, X Li, Y Wang, Z Pan - Information Sciences, 2023 - Elsevier
To further enhance the reliability of Machine Learning (ML) systems, considerable efforts
have been dedicated to developing privacy protection techniques. Recently, membership …

Use the spear as a shield: An adversarial example based privacy-preserving technique against membership inference attacks

M Xue, C Yuan, C He, Y Wu, Z Wu… - … on Emerging Topics …, 2022 - ieeexplore.ieee.org
Recent research demonstrates that deep learning models are vulnerable to membership
inference attacks. Few defenses have been proposed, but they suffer from compromising the …

Defending against membership inference attacks: RM Learning is all you need

Z Zhang, J Ma, X Ma, R Yang, X Wang, J Zhang - Information Sciences, 2024 - Elsevier
Large-capacity machine learning models are vulnerable to membership inference attacks
that disclose the privacy of the training dataset. The privacy concerns posed by membership …

KD‐GAN: An effective membership inference attacks defence framework

Z Zhang, G Lin, L Ke, S Peng, L Hu… - International Journal of …, 2022 - Wiley Online Library
Over the past few years, a variety of membership inference attacks against deep learning
models have emerged, raising significant privacy concerns. These attacks can easily infer …