I know what you see: Power side-channel attack on convolutional neural network accelerators

L Wei, B Luo, Y Li, Y Liu, Q Xu - … of the 34th Annual Computer Security …, 2018 - dl.acm.org
Deep learning has become the de facto computational paradigm for various kinds of
perception problems, including many privacy-sensitive applications such as online medical …

Reverse engineering convolutional neural networks through side-channel information leaks

W Hua, Z Zhang, GE Suh - Proceedings of the 55th Annual Design …, 2018 - dl.acm.org
A convolutional neural network (CNN) model represents a crucial piece of intellectual
property in many applications. Revealing its structure or weights would leak confidential …

Stealing neural network structure through remote FPGA side-channel analysis

Y Zhang, R Yasaei, H Chen, Z Li… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep Neural Network (DNN) models have been extensively developed by companies for a
wide range of applications. The development of a customized DNN model with great …

CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel

L Batina, S Bhasin, D Jap, S Picek - 28th USENIX Security Symposium …, 2019 - usenix.org
Machine learning has become mainstream across industries. Numerous examples prove its
validity for security applications. In this work, we investigate how to reverse engineer a …

CSI neural network: Using side-channels to recover your artificial neural network information

L Batina, S Bhasin, D Jap, S Picek - arXiv preprint arXiv:1810.09076, 2018 - arxiv.org
Machine learning has become mainstream across industries. Numerous examples have proved
its validity for security applications. In this work, we investigate how to reverse engineer …

The secret revealer: Generative model-inversion attacks against deep neural networks

Y Zhang, R Jia, H Pei, W Wang… - Proceedings of the …, 2020 - openaccess.thecvf.com
This paper studies model-inversion attacks, in which access to a model is abused to infer
information about the training data. Since their first introduction, such attacks have …
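
The snippet defines the attack only at a high level. As a concrete illustration, here is a minimal sketch of the baseline (non-generative) formulation: gradient ascent on a target class's confidence with respect to the input itself. The model handle, input shape, and hyperparameters are all assumptions; the paper's actual method replaces this raw pixel-space search with a generative (GAN-based) prior.

    # Minimal model-inversion sketch (baseline pixel-space variant, not the
    # paper's generative method). `model` is an assumed trained classifier.
    import torch

    def invert_class(model, target_class, input_shape=(1, 1, 32, 32),
                     steps=500, lr=0.1):
        model.eval()
        # Start from a blank input and optimize it directly.
        x = torch.zeros(input_shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x)
            # Ascend the log-probability of the target class
            # (equivalently, minimize its negative).
            loss = -torch.log_softmax(logits, dim=1)[0, target_class]
            loss.backward()
            opt.step()
            x.data.clamp_(0, 1)  # keep the reconstruction in a valid pixel range
        return x.detach()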

ES attack: Model stealing against deep neural networks without data hurdles

X Yuan, L Ding, L Zhang, X Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep neural networks (DNNs) have become the essential components for various
commercialized machine learning services, such as Machine Learning as a Service …

TensorClog: An imperceptible poisoning attack on deep neural network applications

J Shen, X Zhu, D Ma - IEEE Access, 2019 - ieeexplore.ieee.org
Internet application providers now have more incentive than ever to collect user data, which
greatly increases the risk of user privacy violations due to the emergence of deep neural …

Backdoor embedding in convolutional neural network models via invisible perturbation

H Zhong, C Liao, AC Squicciarini, S Zhu… - Proceedings of the Tenth …, 2020 - dl.acm.org
Deep learning models have consistently outperformed traditional machine learning models
in various classification tasks, including image classification. As such, they have become …

Security and privacy issues in deep learning: a brief review

T Ha, TK Dang, H Le, TA Truong - SN Computer Science, 2020 - Springer
Nowadays, deep learning is becoming increasingly important in our daily lives. The
appearance of deep learning in many applications relates to prediction and …