Evaluations of machine learning privacy defenses are misleading

M Aerni, J Zhang, F Tramèr - Proceedings of the 2024 on ACM SIGSAC …, 2024 - dl.acm.org
Empirical defenses for machine learning privacy forgo the provable guarantees of
differential privacy in the hope of achieving higher utility while resisting realistic adversaries …

Low-Cost High-Power Membership Inference Attacks

S Zarifzadeh, P Liu, R Shokri - Forty-first International Conference on …, 2024 - openreview.net
Membership inference attacks aim to detect if a particular data point was used in training a
model. We design a novel statistical test to perform robust membership inference attacks …
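Several of the results below concern membership inference. As background only (this is not the statistical test from the paper above), the simplest form of such an attack thresholds a model's per-example loss, since training members tend to have lower loss than non-members. A minimal sketch, with toy loss values chosen for illustration:

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Generic loss-threshold membership inference attack:
    flag an example as a likely training member if its loss
    falls below the threshold (members tend to fit better)."""
    return np.asarray(losses) < threshold

# Toy per-example losses; real attacks calibrate the threshold
# (e.g. via shadow models) rather than picking it by hand.
member_losses = [0.05, 0.10, 0.20]      # examples seen in training
nonmember_losses = [0.90, 1.20, 0.70]   # held-out examples

member_preds = loss_threshold_mia(member_losses, threshold=0.5)
nonmember_preds = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

Stronger attacks such as LiRA (referenced by the WaKA entry below) replace the fixed threshold with a likelihood-ratio test over shadow-model loss distributions.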

The 2010 Census Confidentiality Protections Failed, Here's How and Why

JM Abowd, T Adams, R Ashmead, D Darais, S Dey… - 2023 - nber.org
Using only 34 published tables, we reconstruct five variables (census block, sex, age, race,
and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable …

Privacy Analyses in Machine Learning

J Ye - Proceedings of the 2024 on ACM SIGSAC Conference …, 2024 - dl.acm.org
Machine learning models sometimes memorize sensitive training data features, posing
privacy risks. To control such privacy risks, Dwork et al. proposed the definition of differential …

Provable Privacy Attacks on Trained Shallow Neural Networks

G Smorodinsky, G Vardi, I Safran - arXiv preprint arXiv:2410.07632, 2024 - arxiv.org
We study what provable privacy attacks can be shown on trained, 2-layer ReLU neural
networks. We explore two types of attacks: data reconstruction attacks and membership …

Do Parameters Reveal More than Loss for Membership Inference?

A Suri, X Zhang, D Evans - arXiv preprint arXiv:2406.11544, 2024 - arxiv.org
Membership inference attacks aim to infer whether an individual record was used to train a
model, serving as a key tool for disclosure auditing. While such evaluations are useful to …

Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

H Xiong, X Zhang, J Chen, X Sun, Y Li, Z Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
Given the complexity and lack of transparency in deep neural networks (DNNs), extensive
efforts have been made to make these systems more interpretable or explain their behaviors …

WaKA: Data Attribution using K-Nearest Neighbors and Membership Privacy Principles

P Mesana, C Bénesse, H Lautraite, G Caporossi… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we introduce WaKA (Wasserstein K-nearest neighbors Attribution), a novel
attribution method that leverages principles from the LiRA (Likelihood Ratio Attack) …

CARSI II: A Context-Driven Intelligent User Interface

M Wiedner, SV Naveenachandran… - Adjunct Proceedings of …, 2024 - dl.acm.org
Modern automotive infotainment systems offer a complex and wide array of controls and
features through various interaction methods. However, such complexity can distract the …

[PDF][PDF] A Data-Centric Analysis of Membership Inference Attacks

H Ito, J Jälkö - 2024 - helda.helsinki.fi
Master's thesis, Master's Programme in Data Science, by Hibiki Ito: a data-centric analysis of membership inference attacks …