Dawn: Dynamic adversarial watermarking of neural networks

S Szyller, BG Atli, S Marchal, N Asokan - Proceedings of the 29th ACM …, 2021 - dl.acm.org
Training machine learning (ML) models is expensive in terms of computational power,
amounts of labeled data and human expertise. Thus, ML models constitute business value …

Beyond value perturbation: Local differential privacy in the temporal setting

Q Ye, H Hu, N Li, X Meng, H Zheng… - IEEE INFOCOM 2021 …, 2021 - ieeexplore.ieee.org
Time series data have numerous application scenarios. However, since many time series contain
personal data, releasing them directly could cause privacy infringement. All existing …

PrivKVM*: Revisiting key-value statistics estimation with local differential privacy

Q Ye, H Hu, X Meng, H Zheng, K Huang… - … on Dependable and …, 2021 - ieeexplore.ieee.org
A key factor in big data analytics and artificial intelligence is the collection of user data from a
large population. However, the collection of user data comes at the price of privacy risks, not …

Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …

SoK: Machine learning governance

V Chandrasekaran, H Jia, A Thudi, A Travers… - arXiv preprint arXiv …, 2021 - arxiv.org
The application of machine learning (ML) in computer systems introduces not only many
benefits but also risks to society. In this paper, we develop the concept of ML governance to …

Stateful detection of model extraction attacks

S Pal, Y Gupta, A Kanade, S Shevade - arXiv preprint arXiv:2107.05166, 2021 - arxiv.org
Machine-Learning-as-a-Service providers expose machine learning (ML) models through
application programming interfaces (APIs) to developers. Recent work has shown that …

Collecting high-dimensional and correlation-constrained data with local differential privacy

R Du, Q Ye, Y Fu, H Hu - 2021 18th Annual IEEE International …, 2021 - ieeexplore.ieee.org
Local differential privacy (LDP) is a promising privacy model for distributed data collection. It
has been widely deployed in real-world systems (e.g., Chrome, iOS, macOS). In LDP-based …

PNAS: A privacy preserving framework for neural architecture search services

Z Pan, J Zeng, R Cheng, H Yan, J Li - Information Sciences, 2021 - Elsevier
The success of deep neural networks has contributed to many fields, such as finance, medicine
and speech recognition. Machine learning models adopted in these fields are always …

First to possess his statistics: Data-free model extraction attack on tabular data

M Tasumi, K Iwahana, N Yanai, K Shishido… - arXiv preprint arXiv …, 2021 - arxiv.org
Model extraction attacks are attacks in which an adversary obtains a machine
learning model whose performance is comparable to that of the victim model through …

Confined gradient descent: Privacy-preserving optimization for federated learning

Y Zhang, G Bai, X Li, S Nepal, RKL Ko - arXiv preprint arXiv:2104.13050, 2021 - arxiv.org
Federated learning enables multiple participants to collaboratively train a model without
aggregating the training data. Although the training data are kept within each participant and …