Flip: A provable defense framework for backdoor mitigation in federated learning

K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Federated Learning (FL) is a distributed learning paradigm that enables different parties to
train a model together for high quality and strong privacy protection. In this scenario …

Reinforcement learning-based black-box model inversion attacks

G Han, J Choi, H Lee, J Kim - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Model inversion attacks are a type of privacy attack that reconstructs private data
used to train a machine learning model, solely by accessing the model. Recently, white-box …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Label-only model inversion attacks via knowledge transfer

BN Nguyen, K Chandrasegaran… - Advances in …, 2024 - proceedings.neurips.cc
In a model inversion (MI) attack, an adversary abuses access to a machine learning (ML)
model to infer and reconstruct private training data. Remarkable progress has been made in …

All Rivers Run to the Sea: Private Learning with Asymmetric Flows

Y Niu, RE Ali, S Prakash… - Proceedings of the …, 2024 - openaccess.thecvf.com
Data privacy is of great concern in cloud machine-learning service platforms when sensitive
data are exposed to service providers. While private computing environments (e.g., secure …

A GAN-based defense framework against model inversion attacks

X Gong, Z Wang, S Li, Y Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
With the development of deep learning, deep neural network (DNN)-based applications have
become an indispensable aspect of daily life. However, recent studies have shown that …

Boosting model inversion attacks with adversarial examples

S Zhou, T Zhu, D Ye, X Yu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Model inversion attacks involve reconstructing the training data of a target model, which
raises serious privacy concerns for machine learning models. However, these attacks …

Purifier: Defending data inference attacks via transforming confidence scores

Z Yang, L Wang, D Yang, J Wan, Z Zhao… - Proceedings of the …, 2023 - ojs.aaai.org
Neural networks are susceptible to data inference attacks such as the membership inference
attack, the adversarial model inversion attack and the attribute inference attack, where the …

Privacy leakage on DNNs: A survey of model inversion attacks and defenses

H Fang, Y Qiu, H Yu, W Yu, J Kong, B Chong… - arXiv preprint arXiv …, 2024 - arxiv.org
Model Inversion (MI) attacks aim to disclose private information about the training data by
abusing access to the pre-trained models. These attacks enable adversaries to reconstruct …

Model Inversion Robustness: Can Transfer Learning Help?

ST Ho, KJ Hao, K Chandrasegaran… - Proceedings of the …, 2024 - openaccess.thecvf.com
Model Inversion (MI) attacks aim to reconstruct private training data by abusing
access to machine learning models. Contemporary MI attacks have achieved impressive …