Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information from the learning models through …

Monitoring-based differential privacy mechanism against query-flooding parameter duplication attack

H Yan, X Li, H Li, J Li, W Sun, F Li - arXiv preprint arXiv:2011.00418, 2020 - arxiv.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information from the learning models through …
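
The two entries above only hint at the defense, so the following is a minimal illustrative sketch of a monitoring-based mechanism rather than the authors' algorithm: a prediction service counts queries per client and inflates the Laplace noise on its answers once a client's query volume looks like flooding. The class and parameter names (MonitoredDPService, flood_threshold, base_epsilon) are assumptions made for the example.

import numpy as np

class MonitoredDPService:
    """Answer queries with Laplace noise that grows under query flooding."""

    def __init__(self, model, base_epsilon=1.0, sensitivity=1.0, flood_threshold=100):
        self.model = model                      # callable: x -> scalar prediction
        self.base_epsilon = base_epsilon        # per-answer budget under normal load
        self.sensitivity = sensitivity          # assumed L1 sensitivity of the output
        self.flood_threshold = flood_threshold  # queries before noise starts growing
        self.counts = {}                        # per-client query counters

    def query(self, client_id, x):
        n = self.counts.get(client_id, 0) + 1
        self.counts[client_id] = n
        # Once a client exceeds the threshold, shrink the effective epsilon,
        # i.e. add more noise, so extraction by mass querying degrades.
        eps = self.base_epsilon / max(1.0, n / self.flood_threshold)
        noise = np.random.laplace(0.0, self.sensitivity / eps)
        return self.model(x) + noise

# Usage: svc = MonitoredDPService(lambda x: float(np.dot([0.7, -1.2], x)))
#        print(svc.query("client-42", [1.0, 2.0]))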

Mitigating query-flooding parameter duplication attack on regression models with high-dimensional Gaussian mechanism

X Li, H Li, H Yan, Z Cheng, W Sun, H Zhu - arXiv preprint arXiv …, 2020 - arxiv.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information from the learning models through …
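
The title above names a high-dimensional Gaussian mechanism; the snippet does not describe it, so the sketch below shows only the standard (ε, δ) Gaussian mechanism applied to a vector of regression outputs, with noise scale σ = Δ₂·√(2 ln(1.25/δ))/ε (valid for ε < 1). The function name and example values are illustrative, not taken from the paper.

import numpy as np

def gaussian_mechanism(values, l2_sensitivity, epsilon, delta):
    """Add isotropic Gaussian noise calibrated for (epsilon, delta)-DP.

    Uses sigma = l2_sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    the standard calibration, valid for epsilon < 1.
    """
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return values + np.random.normal(0.0, sigma, size=np.shape(values))

# Example: perturb a batch of regression outputs before returning them.
predictions = np.array([2.1, -0.4, 3.3])   # hypothetical model outputs
noisy = gaussian_mechanism(predictions, l2_sensitivity=1.0, epsilon=0.5, delta=1e-5)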

BDPL: A boundary differentially private layer against machine learning model extraction attacks

H Zheng, Q Ye, H Hu, C Fang, J Shi - … 23–27, 2019, Proceedings, Part I 24, 2019 - Springer
Machine learning models trained on large volumes of proprietary data and intensive
computational resources are valuable assets of their owners, who merchandise these …

Protecting decision boundary of machine learning model with differentially private perturbation

H Zheng, Q Ye, H Hu, C Fang… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Machine learning service APIs allow model owners to monetize proprietary models by
offering prediction services to third-party users. However, existing literature shows that …
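
Both boundary-protection entries above perturb answers near the model's decision boundary. As a hedged illustration, the sketch below assumes a randomized-response-style defense for a binary classifier: queries that fall inside a narrow boundary zone keep their true label only with probability e^ε/(1+e^ε), while queries far from the boundary are answered truthfully. The zone width and function name are placeholders, not the papers' exact construction.

import numpy as np

def boundary_private_label(score, epsilon, boundary_width=0.1):
    """score: signed classifier score; its sign gives the predicted label."""
    label = 1 if score >= 0 else 0
    if abs(score) > boundary_width:
        return label                       # far from the boundary: answer truthfully
    # Inside the boundary zone, keep the true label with probability
    # e^eps / (1 + e^eps) and flip it otherwise (randomized response).
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return label if np.random.rand() < keep_prob else 1 - label

# Example: a query landing close to the boundary gets a randomized answer.
print(boundary_private_label(score=0.03, epsilon=1.0))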

Protecting regression models with personalized local differential privacy

X Li, H Yan, Z Cheng, W Sun… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The equation-solving model extraction attack is an intuitively simple but devastating attack
that steals confidential information from regression models through a sufficient number of queries …
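
The equation-solving attack mentioned in the snippet is easy to make concrete for a plain linear regression model f(x) = w·x + b: querying d+1 linearly independent points yields a solvable linear system that recovers (w, b) exactly. The sketch below demonstrates this with a hypothetical prediction oracle; it illustrates the attack the cited defenses target, not the personalized local differential privacy mechanism itself.

import numpy as np

d = 3
w_true, b_true = np.array([0.7, -1.2, 0.3]), 0.5
oracle = lambda x: float(np.dot(w_true, x)) + b_true   # the victim prediction API

# d + 1 linearly independent query points: the standard basis plus the origin.
X = np.vstack([np.eye(d), np.zeros((1, d))])
y = np.array([oracle(x) for x in X])

A = np.hstack([X, np.ones((d + 1, 1))])   # augment with a bias column
sol = np.linalg.solve(A, y)               # exact solve of the query equations
w_hat, b_hat = sol[:d], sol[d]
assert np.allclose(w_hat, w_true) and np.isclose(b_hat, b_true)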

Differentially private machine learning model against model extraction attack

Z Cheng, Z Li, J Zhang, S Zhang - … International Conferences on …, 2020 - ieeexplore.ieee.org
Machine learning models are vulnerable to model extraction attacks, since attackers can
send a large number of queries to infer the hyperparameters of the model and thus …
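
As a rough illustration of why differentially private answers blunt this kind of extraction (not the paper's specific mechanism): once Laplace noise is added to every response, the attacker's equation solving degrades into a noisy least-squares estimate, and the remaining error depends on the noise scale and the query budget. All values below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.7, -1.2, 0.3])
# Defended oracle: every answer carries Laplace noise of scale 0.5.
noisy_oracle = lambda X: X @ w_true + rng.laplace(0.0, 0.5, size=len(X))

X = rng.normal(size=(200, 3))                        # 200 extraction queries
w_hat, *_ = np.linalg.lstsq(X, noisy_oracle(X), rcond=None)
print("estimation error:", np.linalg.norm(w_hat - w_true))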

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Y Bai, Y Li, M Xie, M Fan - Security and Communication …, 2021 - Wiley Online Library
In recent years, machine learning approaches have been widely adopted for many
applications, including classification. Machine learning models deal with collective sensitive …

Adap CDP-ML: Concentrated Differentially Private machine learning with Adaptive Noise

J Fu, H Cui, S Zhang, X Su - 2023 IEEE 11th Joint International …, 2023 - ieeexplore.ieee.org
Machine learning and big data, foundational pillars of artificial intelligence, are propelling
societal development and garnering significant attention. Recent studies reveal that …

Preserving differential privacy in complex data analysis

Y Wang - 2015 - search.proquest.com
Omnipresent databases from various sources, such as social networks, e-commerce
websites, and health-related wearable devices, have provided researchers with …