Prediction poisoning: Towards defenses against DNN model stealing attacks

T Orekondy, B Schiele, M Fritz - arXiv preprint arXiv:1906.10908, 2019 - arxiv.org
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications, e.g., cloud prediction APIs. Recent advances in model functionality …
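The defensive idea named in the title is to perturb the posterior a prediction API returns, so that a thief training a clone on the answers learns from a poisoned signal while the served top-1 label stays correct. Below is a minimal sketch of that general idea only; it is not the paper's actual perturbation optimization, and the `epsilon` mixing budget and the wrong-class target are assumptions for illustration.

```python
import numpy as np

def poison_prediction(probs: np.ndarray, epsilon: float = 0.3) -> np.ndarray:
    """Perturb a softmax posterior while keeping the argmax intact.

    Illustrative only: mixes the posterior with a distribution concentrated
    on a wrong class, a crude stand-in for gradient-misdirecting perturbations.
    """
    k = probs.shape[0]
    top = int(np.argmax(probs))
    wrong = (top + 1) % k            # a deliberately wrong class to misdirect a clone's gradients
    target = np.zeros(k)
    target[wrong] = 1.0
    poisoned = (1 - epsilon) * probs + epsilon * target
    # Keep the returned label unchanged so benign top-1 accuracy is preserved.
    if np.argmax(poisoned) != top:
        poisoned[top] = poisoned.max() + 1e-6
        poisoned /= poisoned.sum()
    return poisoned

print(poison_prediction(np.array([0.7, 0.2, 0.1])))  # -> [0.49, 0.44, 0.07]
```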

Defending against neural network model stealing attacks using deceptive perturbations

T Lee, B Edwards, I Molloy, D Su - 2019 IEEE Security and …, 2019 - ieeexplore.ieee.org
Machine learning architectures are readily available, but obtaining high-quality labeled data for training is costly. Pre-trained models available as cloud services can be used to …
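Here the defense distorts the returned probability vector rather than the label: the soft-label detail that a distillation-style thief relies on is flattened while the top-1 answer is preserved. A rough sketch of that idea, with `strength` as an assumed knob rather than anything taken from the paper:

```python
import numpy as np

def deceive(probs: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """Return a posterior whose top-1 label is intact but whose soft-label
    information (the signal a distillation-style thief exploits) is flattened.

    Illustrative sketch, not the paper's perturbation scheme.
    """
    k = probs.shape[0]
    top = int(np.argmax(probs))
    rest = np.delete(np.arange(k), top)
    # Push the non-top probabilities toward uniform, erasing their ranking.
    uniform = (1.0 - probs[top]) / (k - 1)
    out = probs.copy()
    out[rest] = (1 - strength) * probs[rest] + strength * uniform
    return out / out.sum()
```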

CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples

H Yu, K Yang, T Zhang, YY Tsai, TY Ho, Y Jin - NDSS, 2020 - ndss-symposium.org
Cloud-based Machine Learning as a Service (MLaaS) is gradually gaining acceptance as a
reliable solution to various real-life scenarios. These services typically utilize Deep Neural …
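CloudLeak's premise is that inputs pushed toward a decision boundary are unusually informative to have labeled by the victim, so adversarial examples make efficient stealing queries. The sketch below uses plain FGSM on a local substitute as the simplest stand-in for the paper's stronger margin-based crafting; `substitute` is an assumed PyTorch classifier and `eps` an assumed step size.

```python
import torch

def boundary_queries(substitute, x: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Craft query inputs near the substitute's decision boundary (one FGSM step).

    Sketch only: increases the loss on the substitute's own prediction so the
    sample drifts toward the boundary, then returns it as the next victim query.
    """
    x = x.clone().requires_grad_(True)
    logits = substitute(x)
    loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```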

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and compute-intensive resources. To recoup their training costs, a model provider can monetize DL models through …

PRADA: Protecting against DNN model stealing attacks

M Juuti, S Szyller, S Marchal… - 2019 IEEE European …, 2019 - ieeexplore.ieee.org
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business …
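PRADA is a detection defense: instead of changing answers, it watches the query stream and flags clients whose inputs do not look like natural data. A simplified sketch of its core statistic follows, assuming flattened feature vectors; the plain Shapiro-Wilk test, the 20-sample minimum, and the p-value threshold are assumptions, and the paper's per-class bookkeeping is more involved.

```python
import numpy as np
from scipy.stats import shapiro

def prada_like_alarm(queries: np.ndarray, pvalue_threshold: float = 0.05) -> bool:
    """Flag a query stream whose pairwise-distance distribution looks unnatural.

    For each new query, record its minimum L2 distance to all previous queries,
    then test whether those distances are plausibly normal: benign traffic
    tends to be, synthetic attack queries tend not to be.
    """
    dists = []
    for i in range(1, len(queries)):
        dists.append(np.linalg.norm(queries[:i] - queries[i], axis=1).min())
    if len(dists) < 20:              # too few samples for a meaningful test
        return False
    _, p = shapiro(np.asarray(dists))
    return p < pvalue_threshold      # low p-value: reject normality, raise alarm
```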

Defending against model stealing attacks with adaptive misinformation

S Kariyappa, MK Qureshi - … of the IEEE/CVF Conference on …, 2020 - openaccess.thecvf.com
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the …
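The defense gates on whether a query looks in-distribution: benign-looking inputs get the true posterior, out-of-distribution ones get a deliberately wrong but confident answer, so a clone trained on the replies inherits the misinformation. A minimal sketch using a max-probability gate with an assumed threshold `tau` (the paper instead trains a dedicated misinformation model and a proper OOD detector):

```python
import numpy as np

def adaptive_misinform(probs: np.ndarray, tau: float = 0.6) -> np.ndarray:
    """Answer in-distribution queries honestly and OOD-looking ones deceptively."""
    if probs.max() >= tau:
        return probs                 # confident: likely benign, answer truthfully
    # Low confidence suggests a synthetic / out-of-distribution attack query:
    # return a posterior peaked on a deliberately wrong class.
    k = probs.shape[0]
    wrong = (int(np.argmax(probs)) + 1) % k
    fake = np.full(k, 0.1 / (k - 1))
    fake[wrong] = 0.9
    return fake
```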

Towards practical deployment-stage backdoor attack on deep neural networks

X Qi, T Xie, R Pan, J Zhu, Y Yang… - Proceedings of the …, 2022 - openaccess.thecvf.com
One major goal of the AI security community is to securely and reliably produce and deploy
deep learning models for real-world applications. To this end, data poisoning based …

SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency

J Guo, Y Li, X Chen, H Guo, L Sun, C Liu - arXiv preprint arXiv:2302.03251, 2023 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …
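The detection statistic in the title can be made concrete: amplify an input's pixel values by several factors and check how often the predicted label survives, since backdoor-triggered inputs tend to keep their target label under amplification while benign ones do not. A sketch under assumptions: `model_predict` is a hypothetical callable returning a class index for an image in [0, 1], and the scale set and decision threshold are illustrative, not the paper's.

```python
import numpy as np

def spc_score(model_predict, x: np.ndarray, scales=(3, 5, 7, 9, 11)) -> float:
    """Scaled-prediction-consistency score for one input, in [0, 1].

    Amplify pixel values, clip to the valid range, and measure how often the
    predicted label matches the original prediction.
    """
    base = model_predict(x)
    hits = sum(model_predict(np.clip(s * x, 0.0, 1.0)) == base for s in scales)
    return hits / len(scales)

# Usage sketch: flag inputs whose consistency under scaling is suspiciously high.
# is_suspicious = spc_score(model_predict, x) >= 0.8
```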

ES Attack: Model stealing against deep neural networks without data hurdles

X Yuan, L Ding, L Zhang, X Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) have become the essential components for various
commercialized machine learning services, such as Machine Learning as a Service …
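The attack's point is that no real data is needed: the thief bootstraps a synthetic training set, labels it by querying the victim, and alternates fitting the substitute with synthesizing better queries. A skeleton of that loop under loud assumptions: `victim_labels`, `train_clone`, and `synthesize` are hypothetical placeholders, and the random seeding merely keeps the sketch self-contained (the paper explores heuristic and evolutionary synthesis).

```python
import numpy as np

def es_style_steal(victim_labels, train_clone, synthesize, rounds: int = 10,
                   batch: int = 256, dim: int = 3 * 32 * 32):
    """Skeleton of a data-free model stealing loop in the spirit of ES Attack."""
    clone = None
    X = np.random.rand(batch, dim)          # seed queries; no real data required
    for _ in range(rounds):
        y = victim_labels(X)                # 1) query the victim for labels
        clone = train_clone(clone, X, y)    # 2) fit the substitute on (X, y)
        X = synthesize(clone, batch, dim)   # 3) craft the next query batch
    return clone
```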

Latent backdoor attacks on deep neural networks

Y Yao, H Li, H Zheng, BY Zhao - Proceedings of the 2019 ACM SIGSAC …, 2019 - dl.acm.org
Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs),
where misclassification rules are hidden inside normal models, only to be triggered by very …