Reducing audio membership inference attack accuracy to chance: 4 defenses

M Lomnitz, N Lopatina, P Gamble, Z Hampel-Arias, L Tindall, FA Mejia, MA Barrios
arXiv preprint arXiv:1911.01888, 2019 - arxiv.org
It is critical to understand the privacy and robustness vulnerabilities of machine learning models, as their implementation expands in scope. In membership inference attacks, adversaries can determine whether a particular set of data was used in training, putting the privacy of the data at risk. Existing work has mostly focused on image-related tasks; we generalize this type of attack to speaker identification on audio samples. We demonstrate attack precision of 85.9% and recall of 90.8% for LibriSpeech, and 78.3% precision and 90.7% recall for VOiCES (Voices Obscured in Complex Environmental Settings). We find that implementing defenses such as prediction obfuscation, defensive distillation, or adversarial training can reduce attack accuracy to chance.
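To make the abstract's claim concrete, below is a minimal Python sketch of the confidence-thresholding flavor of membership inference and a label-only form of prediction obfuscation. This is an illustration under assumptions, not the authors' implementation: the paper's attack and its exact obfuscation scheme are not specified here, and the function names, threshold, and one-hot release are all hypothetical.

```python
import numpy as np

def infer_membership(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Toy confidence-thresholding attack: flag a sample as a training-set
    member when the model's top softmax score exceeds `threshold`.
    (Illustrative only; a realistic attack would train an attack model.)"""
    return probs.max(axis=1) > threshold

def obfuscate_predictions(probs: np.ndarray) -> np.ndarray:
    """Toy prediction obfuscation: release only the predicted label as a
    one-hot vector, hiding the confidence scores the attack relies on."""
    one_hot = np.zeros_like(probs)
    one_hot[np.arange(len(probs)), probs.argmax(axis=1)] = 1.0
    return one_hot

# Hypothetical speaker-ID softmax scores: first row a confidently scored
# training ("member") sample, second row an unseen ("non-member") sample.
probs = np.array([[0.94, 0.04, 0.02],
                  [0.55, 0.30, 0.15]])
print(infer_membership(probs))                         # [ True False]
print(infer_membership(obfuscate_predictions(probs)))  # [ True  True]
```

After obfuscation every sample presents an identical top score of 1.0, so the thresholding attack can no longer separate members from non-members; this is one way a prediction-release defense can drive attack accuracy toward the chance baseline the paper reports.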