Differentially private machine learning model against model extraction attack

Z Cheng, Z Li, J Zhang, S Zhang - … International Conferences on …, 2020 - ieeexplore.ieee.org
Machine learning models are vulnerable to model extraction attacks, since attackers can send plenty of queries to infer the hyperparameters of the machine learning model, thus …
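The query-based threat this snippet describes can be illustrated with a minimal numpy sketch. Everything here (the linear "victim", the query budget, the surrogate trained by gradient descent) is a hypothetical stand-in, not the paper's setup: an attacker who sees only hard labels fits a surrogate on query/answer pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim": a black-box linear classifier whose weights
# the attacker never sees; the API returns only hard labels.
true_w = np.array([1.5, -2.0, 0.5])

def victim_predict(X):
    """Black-box prediction API: hard labels only."""
    return (X @ true_w > 0).astype(int)

# Extraction: send many random queries and record the answers ...
X_q = rng.normal(size=(5000, 3))
y_q = victim_predict(X_q)

# ... then fit a surrogate on the (query, answer) pairs; here,
# logistic regression by plain gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_q @ w)))
    w -= 0.1 * X_q.T @ (p - y_q) / len(y_q)

# Agreement on fresh inputs measures how well the model was "stolen".
X_test = rng.normal(size=(2000, 3))
agreement = np.mean(victim_predict(X_test) == (X_test @ w > 0))
print(f"surrogate agrees with victim on {agreement:.1%} of fresh inputs")
```

The point of the sketch is that hard-label access alone suffices: with enough queries the surrogate's decision boundary converges to the victim's, which is what defenses like the differentially private ones surveyed here try to prevent.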

An attack-based evaluation method for differentially private learning against model inversion attack

C Park, D Hong, C Seo - IEEE Access, 2019 - ieeexplore.ieee.org
As the amount of data and computational power explosively increase, valuable results are
being created using machine learning techniques. In particular, models based on deep …

Differential privacy preservation in interpretable feedforward-designed convolutional neural networks

J Wang, Z Tan, X Li, Y Hu - … on trust, security and privacy in …, 2020 - ieeexplore.ieee.org
The feedforward-designed convolutional neural network (FF-CNN) is an interpretable network. Training its parameters does not require backpropagation (BP) and …

BDPL: A boundary differentially private layer against machine learning model extraction attacks

H Zheng, Q Ye, H Hu, C Fang, J Shi - … 23–27, 2019, Proceedings, Part I 24, 2019 - Springer
Machine learning models trained on large volumes of proprietary data with intensive computational resources are valuable assets of their owners, who merchandise these …

Research and Application Path Analysis of Deep Learning Differential Privacy Protection Method Based on Multiple Data Sources

J Chen, Y Liu - 2022 3rd International Conference on Big Data …, 2022 - atlantis-press.com
A deep learning model will absorb user-sensitive information during training. When the model is deployed, an attacker can recover the sensitive information in the training data set …

Differentially private convolutional neural networks with adaptive gradient descent

X Huang, J Guan, B Zhang, S Qi… - 2019 IEEE fourth …, 2019 - ieeexplore.ieee.org
Deep learning achieves remarkable success in the fields of target detection, computer
vision, natural language processing, and speech recognition. However, traditional deep …
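Several entries above build on differentially private training. As shared background, here is one generic DP-SGD step in numpy: per-example gradient clipping plus Gaussian noise, in the style popularized by Abadi et al. This is a sketch of the standard mechanism, not this paper's adaptive gradient descent variant, and the toy data and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0):
    """One generic DP-SGD step: clip each per-example gradient to L2
    norm `clip`, sum, add Gaussian noise scaled by sigma * clip,
    then average. (Sketch; not the paper's adaptive variant.)"""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # logistic predictions
    grads = (p - y)[:, None] * X                # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * (grads.sum(axis=0) + noise) / len(y)

# Toy separable data: two Gaussian blobs.
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.array([1] * 200 + [0] * 200)

w = np.zeros(2)
for _ in range(300):
    w = dp_sgd_step(w, X, y)

acc = np.mean((X @ w > 0) == y)
print(f"train accuracy after noisy training: {acc:.1%}")
```

Clipping bounds each example's influence on the update, which is what lets the added Gaussian noise be calibrated to a formal (ε, δ) guarantee; on easy data the utility cost is small.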

Broadening differential privacy for deep learning against model inversion attacks

Q Zhang, J Ma, Y Xiao, J Lou… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Deep learning models have achieved great success in many real-world tasks such as image
recognition, machine translation, and self-driving cars. A large amount of data are needed to …

Preserving privacy in convolutional neural network: An ε-tuple differential privacy approach

TA Adesuyi, BM Kim - 2019 IEEE 2nd International Conference …, 2019 - ieeexplore.ieee.org
Recent breakthroughs in neural networks have led to the birth of the convolutional neural network (CNN), which has been found to be very efficient, especially in the areas of image recognition …

Protecting decision boundary of machine learning model with differentially private perturbation

H Zheng, Q Ye, H Hu, C Fang… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Machine learning service APIs allow model owners to monetize proprietary models by offering prediction services to third-party users. However, existing literature shows that …
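The defense named in the title above, perturbing only the answers to queries that fall near the decision boundary, can be sketched generically. This is an illustration with a toy linear model and randomized response, under assumed parameters (a hard-coded margin and ε), not the authors' exact boundary differentially private mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model served behind a prediction API.
w = np.array([1.0, -1.0])

def noisy_predict(x, eps=1.0, margin=0.3):
    """Answer a query exactly when it is far from the decision
    boundary; near the boundary, report the true label only with
    probability e^eps / (e^eps + 1) (randomized response)."""
    score = float(x @ w)
    label = int(score > 0)
    if abs(score) < margin:                     # boundary query
        keep = np.exp(eps) / (np.exp(eps) + 1)  # P(report true label)
        if rng.random() > keep:
            label = 1 - label                   # flip the answer
    return label

# Near the boundary, answers are randomized ...
answers = [noisy_predict(np.array([0.01, 0.0])) for _ in range(2000)]
print(f"true-answer rate near boundary: {np.mean(answers):.2f}")  # ~ 0.73

# ... while far from it they stay exact, preserving utility.
print(noisy_predict(np.array([5.0, -5.0])))  # 1
```

The design intuition is that extraction attacks gain most from queries close to the boundary, so randomizing only those answers degrades the attacker's surrogate while leaving ordinary, confident predictions untouched.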

A practical differentially private support vector machine

F Xu, J Peng, J Xiang, D Zha - … Internet of People and Smart City …, 2019 - ieeexplore.ieee.org
Privacy preserving data analysis is currently one of the research hotspots in the field of
information security. The objective of data analysis is to extract valuable information from …