Defenses to membership inference attacks: A survey

L Hu, A Yan, H Yan, J Li, T Huang, Y Zhang… - ACM Computing …, 2023 - dl.acm.org
Machine learning (ML) has gained widespread adoption in a variety of fields, including
computer vision and natural language processing. However, ML models are vulnerable to …

Membership inference attacks against language models via neighbourhood comparison

J Mattern, F Mireshghallah, Z Jin, B Schölkopf… - arXiv preprint arXiv …, 2023 - arxiv.org
Membership Inference attacks (MIAs) aim to predict whether a data sample was present in
the training data of a machine learning model or not, and are widely used for assessing the …

Unraveling Attacks to Machine Learning-Based IoT Systems: A Survey and the Open Libraries Behind Them

C Liu, B Chen, W Shao, C Zhang… - IEEE Internet of …, 2024 - ieeexplore.ieee.org
The advent of the Internet of Things (IoT) has brought forth an era of unprecedented
connectivity, with an estimated 80 billion smart devices expected to be in operation by the …

Membership inference attacks against text-to-image generation models

Y Wu, N Yu, Z Li, M Backes, Y Zhang - 2022 - openreview.net
Text-to-image generation models have recently attracted unprecedented attention as they
unlatch imaginative applications in all areas of life. However, developing such models …

Loss and Likelihood Based Membership Inference of Diffusion Models

H Hu, J Pang - International Conference on Information Security, 2023 - Springer
Recent years have witnessed the tremendous success of diffusion models in data synthesis.
However, when diffusion models are applied to sensitive data, they also give rise to severe …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Learning to unlearn for robust machine unlearning

MH Huang, LG Foo, J Liu - European Conference on Computer Vision, 2025 - Springer
Machine unlearning (MU) seeks to remove knowledge of specific data samples
from trained models without the necessity for complete retraining, a task made challenging …

TEAR: Exploring temporal evolution of adversarial robustness for membership inference attacks against federated learning

G Liu, Z Tian, J Chen, C Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables
multiple clients to train a unified model without disclosing their private data. However …

Practical membership inference attacks against fine-tuned large language models via self-prompt calibration

W Fu, H Wang, C Gao, G Liu, Y Li, T Jiang - arXiv preprint arXiv …, 2023 - arxiv.org
Membership Inference Attacks (MIA) aim to infer whether a target data record has been
utilized for model training or not. Prior attempts have quantified the privacy risks of language …

Please tell me more: Privacy impact of explainability through the lens of membership inference attack

H Liu, Y Wu, Z Yu, N Zhang - 2024 IEEE Symposium on Security and …, 2024 - sites.wustl.edu
Explainability is increasingly recognized as an enabling technology for the broader adoption
of machine learning (ML), particularly for safety-critical applications. This has given rise to …