I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …

A comprehensive review on deep learning algorithms: Security and privacy issues

M Tayyab, M Marjani, NZ Jhanjhi, IAT Hashem… - Computers & …, 2023 - Elsevier
Machine Learning (ML) algorithms are used to train machines to perform various
complicated tasks that modify and improve with experience. It has become …

Reconstructing training data from trained neural networks

N Haim, G Vardi, G Yehudai… - Advances in Neural …, 2022 - proceedings.neurips.cc
Understanding to what extent neural networks memorize training data is an intriguing
question with practical and theoretical implications. In this paper we show that in some …

Deepsteal: Advanced model extractions leveraging efficient weight stealing in memories

AS Rakin, MHI Chowdhuryy, F Yao… - 2022 IEEE symposium …, 2022 - ieeexplore.ieee.org
Recent advancements in Deep Neural Networks (DNNs) have enabled widespread
deployment in multiple security-sensitive domains. The need for resource-intensive training …

Fingerprinting deep neural networks globally via universal adversarial perturbations

Z Peng, S Li, G Chen, C Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this paper, we propose a novel and practical mechanism which enables the service
provider to verify whether a suspect model is stolen from the victim model via model …

Copy, right? A testing framework for copyright protection of deep learning models

J Chen, J Wang, T Peng, Y Sun… - … IEEE symposium on …, 2022 - ieeexplore.ieee.org
Deep learning models, especially those large-scale and high-performance ones, can be
very costly to train, demanding a considerable amount of data and computational resources …

SoK: How robust is image classification deep neural network watermarking?

N Lukas, E Jiang, X Li… - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Deep Neural Network (DNN) watermarking is a method for provenance verification of DNN
models. Watermarking should be robust against watermark removal attacks that derive a …

Model stealing attacks against inductive graph neural networks

Y Shen, X He, Y Han, Y Zhang - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Much real-world data comes in the form of graphs. Graph neural networks (GNNs), a new
family of machine learning (ML) models, have been proposed to fully leverage graph data to …

A systematic review on model watermarking for neural networks

F Boenisch - Frontiers in Big Data, 2021 - frontiersin.org
Machine learning (ML) models are applied in an increasing variety of domains. The
availability of large amounts of data and computational resources encourages the …

MP2ML: A mixed-protocol machine learning framework for private inference

F Boemer, R Cammarota, D Demmler… - Proceedings of the 15th …, 2020 - dl.acm.org
Privacy-preserving machine learning (PPML) has many applications, from medical image
classification and anomaly detection to financial analysis. nGraph-HE enables data …