On the (in)feasibility of attribute inference attacks on machine learning models

BZH Zhao, A Agrawal, C Coburn… - 2021 IEEE European …, 2021 - ieeexplore.ieee.org
With an increase in low-cost machine learning APIs, advanced machine learning models
may be trained on private datasets and monetized by providing them as a service. However …

Are attribute inference attacks just imputation?

B Jayaraman, D Evans - Proceedings of the 2022 ACM SIGSAC …, 2022 - dl.acm.org
Models can expose sensitive information about their training data. In an attribute inference
attack, an adversary has partial knowledge of some training records and access to a model …

Privacy risk in machine learning: Analyzing the connection to overfitting

S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …

Are your sensitive attributes private? Novel model inversion attribute inference attacks on classification models

S Mehnaz, SV Dibbo, R De Viti, E Kabir… - 31st USENIX Security …, 2022 - usenix.org

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …

Systematic evaluation of privacy risks of machine learning models

L Song, P Mittal - 30th USENIX Security Symposium (USENIX Security …, 2021 - usenix.org
Machine learning models are prone to memorizing sensitive data, making them vulnerable
to membership inference attacks in which an adversary aims to guess if an input sample was …

Membership inference attacks against machine learning models

R Shokri, M Stronati, C Song… - 2017 IEEE symposium …, 2017 - ieeexplore.ieee.org
We quantitatively investigate how machine learning models leak information about the
individual data records on which they were trained. We focus on the basic membership …

A pragmatic approach to membership inferences on machine learning models

Y Long, L Wang, D Bu, V Bindschaedler… - 2020 IEEE European …, 2020 - ieeexplore.ieee.org
Membership Inference Attacks (MIAs) aim to determine the presence of a record in a
machine learning model's training data by querying the model. Recent work has …

Sampling attacks: Amplification of membership inference attacks by repeated queries

S Rahimian, T Orekondy, M Fritz - arXiv preprint arXiv:2009.00395, 2020 - arxiv.org
Machine learning models have been shown to leak information violating the privacy of their
training set. We focus on membership inference attacks on machine learning models which …

ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning

SK Murakonda, R Shokri - arXiv preprint arXiv:2007.09339, 2020 - arxiv.org
When building machine learning models using sensitive data, organizations should ensure
that the data processed in such systems is adequately protected. For projects involving …