Evasion attacks against machine learning at test time

B Biggio, I Corona, D Maiorca, B Nelson… - Machine Learning and …, 2013 - Springer
In security-sensitive applications, the success of machine learning depends on a thorough
vetting of its resistance to adversarial data. In one pertinent, well-motivated attack …
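
To make the test-time evasion setting concrete, below is a minimal sketch of a
gradient-based evasion attack on a linear classifier: the attacker nudges a sample
against the gradient of the discriminant until it crosses the decision boundary. This
illustrates the general idea rather than Biggio et al.'s exact gradient-descent attack;
the weights `w`, bias `b`, and step size `eps` are assumptions.

```python
import numpy as np

# Illustrative evasion attack: step the input against the gradient of a
# linear discriminant until its score crosses the decision boundary.
# All parameters here are assumptions, not the paper's experimental setup.

rng = np.random.default_rng(0)
w = rng.normal(size=10)        # classifier weights (assumed known to the attacker)
b = 0.1                        # bias term

def score(x):
    """Linear discriminant: score > 0 means the 'malicious' class."""
    return float(w @ x + b)

# Start from a sample on the malicious side of the boundary.
x = rng.normal(size=10)
x = x + (1.0 - score(x)) * w / (w @ w)   # shift so score(x) == 1.0

# For a linear model the gradient of (w @ x + b) w.r.t. x is just w,
# so each step moves the sample directly toward the boundary.
eps = 0.05
x_adv = x.copy()
while score(x_adv) > 0:
    x_adv -= eps * w / np.linalg.norm(w)

print(f"score before: {score(x):.3f}, after: {score(x_adv):.3f}")
```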

The privacy onion effect: Memorization is relative

N Carlini, M Jagielski, C Zhang… - Advances in …, 2022 - proceedings.neurips.cc
Machine learning models trained on private datasets have been shown to leak their
private data. Recent work has found that the average data point is rarely leaked; it is often …

A marauder's map of security and privacy in machine learning

N Papernot - arXiv preprint arXiv:1811.01134, 2018 - arxiv.org
There is growing recognition that machine learning (ML) exposes new security and privacy
vulnerabilities in software systems, yet the technical community's understanding of the …

Forgotten siblings: Unifying attacks on machine learning and digital watermarking

E Quiring, D Arp, K Rieck - 2018 IEEE European symposium on …, 2018 - ieeexplore.ieee.org
Machine learning is increasingly used in security-critical applications, such as autonomous
driving, face recognition, and malware detection. Most learning methods, however, have not …

[Book][B] Adversarial machine learning

AD Joseph, B Nelson, BIP Rubinstein, JD Tygar - 2018 - books.google.com
Written by leading researchers, this complete introduction brings together all the theory and
tools needed for building robust machine learning in adversarial environments. Discover …

Membership inference attacks against machine learning models via prediction sensitivity

L Liu, Y Wang, G Liu, K Peng… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning (ML) has achieved huge success in recent years, but is also vulnerable to
various attacks. In this article, we concentrate on membership inference attacks and propose …
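
For context, the simplest membership inference baseline thresholds the model's loss on a
candidate point, exploiting the fact that training points tend to be fit more tightly than
unseen ones. The sketch below shows that generic baseline, not Liu et al.'s
prediction-sensitivity measure; the probabilities and threshold are illustrative assumptions.

```python
import numpy as np

# Loss-threshold membership inference baseline: guess 'member' when the
# model's loss on (x, y) is unusually low. The threshold and the example
# probability vectors are assumptions for illustration only.

def cross_entropy(probs, label):
    """Cross-entropy loss of a predicted probability vector for one label."""
    return -np.log(probs[label] + 1e-12)

def is_member(model_probs, label, threshold=0.5):
    """Flag 'member' when the loss falls below a fixed threshold."""
    return cross_entropy(model_probs, label) < threshold

# Toy usage: a confident, low-loss prediction is flagged as a training member.
print(is_member(np.array([0.05, 0.90, 0.05]), label=1))  # True  (loss ~0.11)
print(is_member(np.array([0.40, 0.30, 0.30]), label=1))  # False (loss ~1.20)
```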

Privacy-preserving neural representations of text

M Coavoux, S Narayan, SB Cohen - arXiv preprint arXiv:1808.09408, 2018 - arxiv.org
This article deals with adversarial attacks against deep learning systems for Natural
Language Processing (NLP), in the context of privacy protection. We study a specific type of …

SoK: Security and privacy in machine learning

N Papernot, P McDaniel, A Sinha… - 2018 IEEE European …, 2018 - ieeexplore.ieee.org
Advances in machine learning (ML) in recent years have enabled a dizzying array of
applications such as data analytics, autonomous systems, and security diagnostics. ML is …

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
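
In the auditing framing used here, a membership guess can be calibrated per example:
compare the target model's loss on (x, y) against a reference distribution of losses from
shadow models that never trained on that point. The sketch below illustrates the idea with
synthetic shadow losses; the quantile test, the gamma-distributed losses, and `alpha` are
assumptions, not Ye et al.'s exact construction.

```python
import numpy as np

# Per-example calibrated membership test: flag 'member' when the target
# model's loss on an example is lower than almost all losses that
# non-member shadow models assign to the same example. Shadow losses
# below are synthetic stand-ins, not outputs of real shadow models.

def calibrated_member_guess(target_loss, shadow_losses, alpha=0.05):
    """Guess 'member' if target_loss is below the alpha-quantile of the
    non-member loss distribution for this specific example."""
    return target_loss < np.quantile(shadow_losses, alpha)

# Toy usage with synthetic non-member losses for one example.
shadow_losses = np.random.default_rng(1).gamma(shape=2.0, scale=0.5, size=64)
print(calibrated_member_guess(0.02, shadow_losses))  # likely True  (very low loss)
print(calibrated_member_guess(1.50, shadow_losses))  # likely False (typical loss)
```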

Reconstructing training data with informed adversaries

B Balle, G Cherubin, J Hayes - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Given access to a machine learning model, can an adversary reconstruct the model's
training data? This work studies this question through the lens of a powerful informed adversary …