Machine learning models trained on private datasets have been shown to leak their private data. Recent work has found that the average data point is rarely leaked; it is often …
N Papernot - arXiv preprint arXiv:1811.01134, 2018 - arxiv.org
There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the …
E Quiring, D Arp, K Rieck - 2018 IEEE European symposium on …, 2018 - ieeexplore.ieee.org
Machine learning is increasingly used in security-critical applications, such as autonomous driving, face recognition, and malware detection. Most learning methods, however, have not …
Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover …
L Liu, Y Wang, G Liu, K Peng… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning (ML) has achieved huge success in recent years, but is also vulnerable to various attacks. In this article, we concentrate on membership inference attacks and propose …
This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of …
Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is …
J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why? Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
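The two entries above use membership inference as an auditing tool for training-data leakage. As a minimal sketch of the idea (not the enhanced attacks these papers propose), the simplest baseline predicts "member" whenever a model's loss on a point falls below a threshold; all names and the synthetic loss data below are assumptions for illustration only.

```python
# Illustrative sketch: the loss-threshold membership inference baseline.
# A point is guessed to be a training-set member if the model's loss on it
# is below a threshold; attack quality is summarized as TPR - FPR.
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Return 1 ("member") where per-example loss is below the threshold."""
    return (losses < threshold).astype(int)

def attack_advantage(preds: np.ndarray, is_member: np.ndarray) -> float:
    """Membership advantage: true-positive rate minus false-positive rate."""
    tpr = preds[is_member == 1].mean()
    fpr = preds[is_member == 0].mean()
    return tpr - fpr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic losses (assumption): training points tend to have lower loss
    # than held-out points, which is what the attack exploits.
    member_losses = rng.gamma(shape=1.0, scale=0.2, size=1000)
    nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(1000, int), np.zeros(1000, int)])
    preds = loss_threshold_attack(losses, threshold=0.3)
    print(f"membership advantage: {attack_advantage(preds, labels):.3f}")
```

An advantage near 0 means the model leaks little membership information to this baseline; values approaching 1 indicate strong leakage.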
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question from the lens of a powerful informed adversary …