Defending against model extraction attacks with physical unclonable function

D Li, D Liu, Y Guo, Y Ren, J Su, J Liu - Information Sciences, 2023 - Elsevier
Machine learning models, especially deep neural network (DNN) models, have
widespread and valuable applications in business activities. Training a deep learning model …

Quda: Query-limited data-free model extraction

Z Lin, K Xu, C Fang, H Zheng… - Proceedings of the …, 2023 - dl.acm.org
A model extraction attack typically refers to extracting non-public information from a black-box
machine learning model. Its unauthorized nature poses a significant threat to intellectual …
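
The attack loop these entries share is simple to sketch: choose inputs, query the victim's prediction API, and fit a surrogate on the returned labels. A minimal illustration under assumptions, not Quda's specific query-selection strategy; `victim_predict` is a hypothetical black-box handle, here backed by a local stand-in model:

```python
# Minimal black-box model extraction sketch (hypothetical API names).
# The attacker sees only victim_predict(x) -> label, never the weights.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in victim: in a real attack this is a remote prediction API.
_victim = DecisionTreeClassifier(max_depth=5).fit(
    rng.normal(size=(1000, 10)), rng.integers(0, 2, size=1000)
)

def victim_predict(x):          # the only interface the attacker has
    return _victim.predict(x)

QUERY_BUDGET = 500
queries = rng.normal(size=(QUERY_BUDGET, 10))   # attacker-chosen inputs
labels = victim_predict(queries)                # stolen supervision

surrogate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
surrogate.fit(queries, labels)

# Agreement on fresh inputs measures extraction fidelity.
test = rng.normal(size=(2000, 10))
fidelity = (surrogate.predict(test) == victim_predict(test)).mean()
print(f"surrogate/victim agreement: {fidelity:.2%}")
```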

Extraction of complex DNN models: Real threat or boogeyman?

BG Atli, S Szyller, M Juuti, S Marchal… - … Dependable and Secure …, 2020 - Springer
Recently, machine learning (ML) has introduced advanced solutions to many domains.
Since ML models provide a business advantage to model owners, protecting intellectual …

ES attack: Model stealing against deep neural networks without data hurdles

X Yuan, L Ding, L Zhang, X Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) have become essential components of various
commercialized machine learning services, such as Machine Learning as a Service …
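
Per its title, ES Attack assumes the attacker has no natural data at all. A simplified illustration of the data-free idea, not the paper's actual alternating synthesis/training procedure: generate candidate queries from noise, keep those the victim answers confidently, and train the surrogate on that synthetic set. `victim_proba` is a stand-in for an MLaaS endpoint:

```python
# Data-free stealing sketch: synthesize queries from noise and keep the
# ones the victim labels confidently (illustrative only; ES Attack's
# actual procedure alternates data synthesis and surrogate training).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in victim returning class probabilities (e.g., an MLaaS endpoint).
W = rng.normal(size=(10, 3))
def victim_proba(x):
    z = x @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

synthetic_x, synthetic_y = [], []
for _ in range(20):                       # rounds of synthesis
    cand = rng.normal(size=(256, 10))     # noise-born candidates
    p = victim_proba(cand)
    keep = p.max(axis=1) > 0.8            # confident answers only
    synthetic_x.append(cand[keep])
    synthetic_y.append(p[keep].argmax(axis=1))

X = np.concatenate(synthetic_x)
y = np.concatenate(synthetic_y)
surrogate = LogisticRegression(max_iter=1000).fit(X, y)
print(f"surrogate trained on {len(X)} synthetic points")
```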

Protecting DNNs from theft using an ensemble of diverse models

S Kariyappa, A Prakash, MK Qureshi - International Conference on …, 2021 - par.nsf.gov
Several recent works have demonstrated highly effective model stealing (MS) attacks on
Deep Neural Networks (DNNs) in black-box settings, even when the training data is …
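
The defense gestured at here can be sketched as query-dependent routing: a deterministic hash of each input picks one ensemble member, so an extraction transcript stitches together mismatched decision boundaries. A toy sketch; varying tree depth and seed stands in for the paper's deliberate diversity training:

```python
# Sketch of an ensemble defense: a deterministic hash of each query
# routes it to one of several diverse models, so an extraction
# transcript mixes several decision boundaries.
import hashlib
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 8))
y_train = (X_train.sum(axis=1) > 0).astype(int)

# "Diverse" here is approximated by different seeds/depths; the paper's
# training procedure enforces diversity more deliberately.
ensemble = [
    DecisionTreeClassifier(max_depth=d, random_state=d).fit(X_train, y_train)
    for d in (3, 5, 7, 9)
]

def route(x_row):
    digest = hashlib.sha256(np.round(x_row, 3).tobytes()).digest()
    return digest[0] % len(ensemble)

def serve(x):
    return np.array([
        ensemble[route(row)].predict(row[None, :])[0] for row in x
    ])

print(serve(rng.normal(size=(5, 8))))
```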

PUF-based intellectual property protection for CNN model

D Li, Y Ren, D Liu, Z Guan, Q Zhang, Y Wang… - … on Knowledge Science …, 2022 - Springer
It usually takes a lot of time and resources to train a highly accurate Machine Learning model,
so it is believed that the trainer owns the Intellectual Property (IP) of the model. With the help …
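
In this line of work, the model's IP is bound to hardware by deriving a key from a device's PUF response and releasing only locked weights, so that only the enrolled device can reconstruct usable parameters. A toy sketch, with XOR masking standing in for a real cipher and a fixed byte string standing in for a measured PUF response (real designs add fuzzy extractors to stabilize noisy responses):

```python
# Toy sketch of PUF-based model locking: weights are XOR-masked with a
# keystream derived from a device's PUF response. XOR and the fixed
# "response" below are stand-ins; real schemes use a proper cipher and
# error-corrected PUF outputs.
import hashlib
import numpy as np

def keystream(puf_response: bytes, nbytes: int) -> bytes:
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(puf_response + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:nbytes]

weights = np.random.default_rng(3).normal(size=256).astype(np.float32)
raw = weights.tobytes()

enrolled = b"example-enrolled-puf-response"   # stand-in for a measured response
locked = bytes(a ^ b for a, b in zip(raw, keystream(enrolled, len(raw))))

# On-device unlock: re-measuring the same PUF yields the same keystream.
unlocked = bytes(a ^ b for a, b in zip(locked, keystream(enrolled, len(raw))))
recovered = np.frombuffer(unlocked, dtype=np.float32)
print("weights recovered:", np.allclose(recovered, weights))
```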

Defending against model extraction attacks with OOD feature learning and decision boundary confusion

C Liang, J Huang, Z Zhang, S Zhang - Computers & Security, 2024 - Elsevier
Recent studies have demonstrated that Deep Neural Networks (DNNs) are vulnerable to
model extraction attacks. In these attacks, malicious users utilize Out-Of-Distribution …
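
The implied serving recipe: score each incoming query for how far it sits from the training distribution, and return a deliberately less informative answer when it looks out-of-distribution. A minimal sketch in which distance-to-centroid stands in for the paper's learned OOD features:

```python
# Sketch of an OOD-aware prediction server: queries that look far from
# the training distribution get flattened, uninformative probabilities.
# Distance-to-centroid is a stand-in for a learned OOD detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)

centroid = X.mean(axis=0)
threshold = np.percentile(np.linalg.norm(X - centroid, axis=1), 99)

def serve_proba(q):
    p = clf.predict_proba(q)
    ood = np.linalg.norm(q - centroid, axis=1) > threshold
    p[ood] = 0.5                     # flatten answers for suspect queries
    return p

in_dist = rng.normal(size=(3, 6))
far_out = rng.normal(loc=8.0, size=(3, 6))   # obviously out-of-distribution
print(serve_proba(np.vstack([in_dist, far_out])))
```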

Defending against neural network model stealing attacks using deceptive perturbations

T Lee, B Edwards, I Molloy, D Su - 2019 IEEE Security and …, 2019 - ieeexplore.ieee.org
Machine learning architectures are readily available, but obtaining high-quality labeled
data for training is costly. Pre-trained models available as cloud services can be used to …
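
The title's idea is to keep the API accurate for honest users while misleading extractors: perturb the returned probability vector without changing its argmax, so the served top-1 label is intact but soft-label distillation learns a distorted landscape. A minimal argmax-preserving sketch; the paper derives its perturbation more carefully:

```python
# Argmax-preserving output perturbation: honest users still get the
# correct top-1 label, but the probability vector an extractor would
# distill from is deliberately distorted (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(5)

def deceive(probs, noise=0.3):
    p = np.asarray(probs, dtype=float)
    top = p.argmax(axis=1)
    noisy = p + rng.uniform(0, noise, size=p.shape)
    noisy /= noisy.sum(axis=1, keepdims=True)        # back onto the simplex
    # Restore the original winner by swapping it with the noisy argmax.
    rows = np.arange(len(p))
    wrong = noisy.argmax(axis=1)
    noisy[rows, top], noisy[rows, wrong] = noisy[rows, wrong], noisy[rows, top]
    return noisy

clean = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
served = deceive(clean)
assert (served.argmax(axis=1) == clean.argmax(axis=1)).all()
print(served)
```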

Backdoor attacks via machine unlearning

Z Liu, T Wang, M Huai, C Miao - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
As a new paradigm to erase data from a model and protect user privacy, machine
unlearning has drawn significant attention. However, existing studies on machine …

POSTER: Attack on non-linear physical unclonable function

J Ye, Y Hu, X Li - Proceedings of the 2016 ACM SIGSAC Conference on …, 2016 - dl.acm.org
A Physical Unclonable Function (PUF) is a promising hardware security primitive with broad
application prospects. However, strong PUFs with numerous Challenge and Response …
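
Those numerous challenge-response pairs are exactly what modeling attacks exploit: a linear arbiter PUF's response is the sign of an inner product between secret delay weights and a parity transform of the challenge, so logistic regression on observed CRPs yields a functional clone. A simulation of that classic attack on a linear PUF; the POSTER's point is that non-linear PUFs resist exactly this linear model:

```python
# Modeling attack on a simulated linear arbiter PUF: the response is
# sign(w . phi(c)), so logistic regression on collected CRPs learns a
# software clone. Non-linear PUFs compose several such chains to break
# this linearity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
N_STAGES, N_CRPS = 64, 5000

def features(challenges):
    # Standard arbiter-PUF parity transform: phi_i = prod_{j>=i} (1-2c_j).
    signs = 1 - 2 * challenges                       # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

w_true = rng.normal(size=N_STAGES + 1)               # secret stage delays
challenges = rng.integers(0, 2, size=(N_CRPS, N_STAGES))
responses = (features(challenges) @ w_true > 0).astype(int)

clone = LogisticRegression(max_iter=2000).fit(features(challenges), responses)
test_c = rng.integers(0, 2, size=(2000, N_STAGES))
acc = (clone.predict(features(test_c)) == (features(test_c) @ w_true > 0)).mean()
print(f"clone prediction accuracy: {acc:.2%}")
```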