I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …
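The survey above covers model stealing attacks in which a client abuses a pay-per-query prediction API to clone the provider's model. A minimal toy sketch of that attack class, using an illustrative linear "victim" rather than any model from the paper: the attacker sees only labels returned by the endpoint, yet can fit a local surrogate that agrees with the victim on most inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim: the provider's proprietary classifier (weights are secret).
w_secret = rng.normal(size=5)

def victim_api(X):
    """Simulates a pay-per-query endpoint: returns labels only."""
    return (X @ w_secret > 0).astype(int)

# Attacker: issue queries with synthetic inputs, keep the returned labels.
X_queries = rng.normal(size=(2000, 5))
y_stolen = victim_api(X_queries)

# Fit a surrogate by least squares on the stolen {0, 1} labels
# (centered to {-0.5, 0.5} so the fit recovers the decision direction).
w_surrogate, *_ = np.linalg.lstsq(X_queries, y_stolen - 0.5, rcond=None)

# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
X_test = rng.normal(size=(5000, 5))
fidelity = ((X_test @ w_surrogate > 0).astype(int) == victim_api(X_test)).mean()
```

Real attacks target far richer model families and query strategies; the point here is only that label-only access can leak enough signal to train a high-fidelity copy.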

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and computing-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …

Insights into security and privacy issues in smart healthcare systems based on medical images

F Yan, N Li, AM Iliyasu, AS Salama, K Hirota - Journal of Information …, 2023 - Elsevier
The advent of the fourth industrial revolution along with developments in other emerging
technologies, such as Internet of Things, big data, artificial intelligence as well as cloud and …

Entangled watermarks as a defense against model extraction

H Jia, CA Choquette-Choo, V Chandrasekaran… - 30th USENIX security …, 2021 - usenix.org
Machine learning involves expensive data collection and training procedures. Model owners
may be concerned that valuable intellectual property can be leaked if adversaries mount …

Privacy risks of securing machine learning models against adversarial examples

L Song, R Shokri, P Mittal - Proceedings of the 2019 ACM SIGSAC …, 2019 - dl.acm.org
The arms race between attacks and defenses for machine learning models has come to a
forefront in recent years, in both the security community and the privacy community …

Protecting intellectual property of language generation apis with lexical watermark

X He, Q Xu, L Lyu, F Wu, C Wang - … of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Nowadays, due to the breakthrough in natural language generation (NLG), including
machine translation, document summarization, image captioning, etc., NLG models have …

Machine learning security: Threats, countermeasures, and evaluations

M Xue, C Yuan, H Wu, Y Zhang, W Liu - IEEE Access, 2020 - ieeexplore.ieee.org
Machine learning has been pervasively used in a wide range of applications due to its
technical breakthroughs in recent years. It has demonstrated significant success in dealing …

Model stealing attacks against inductive graph neural networks

Y Shen, X He, Y Han, Y Zhang - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Much real-world data comes in the form of graphs. Graph neural networks (GNNs), a new
family of machine learning (ML) models, have been proposed to fully leverage graph data to …

Muse: Secure inference resilient to malicious clients

R Lehmkuhl, P Mishra, A Srinivasan… - 30th USENIX Security …, 2021 - usenix.org
The increasing adoption of machine learning inference in applications has led to a
corresponding increase in concerns about the privacy guarantees offered by existing …

Defending against model stealing via verifying embedded external features

Y Li, L Zhu, X Jia, Y Jiang, ST Xia, X Cao - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Obtaining a well-trained model involves expensive data collection and training procedures,
therefore the model is a valuable intellectual property. Recent studies revealed that …