Monitoring-based differential privacy mechanism against query-flooding parameter duplication attack

H Yan, X Li, H Li, J Li, W Sun, F Li - arXiv preprint arXiv:2011.00418, 2020 - arxiv.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …

Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …

BDPL: A boundary differentially private layer against machine learning model extraction attacks

H Zheng, Q Ye, H Hu, C Fang, J Shi - … 23–27, 2019, Proceedings, Part I 24, 2019 - Springer
Machine learning models trained on large volumes of proprietary data and intensive
computational resources are valuable assets of their owners, who merchandise these …

Protecting decision boundary of machine learning model with differentially private perturbation

H Zheng, Q Ye, H Hu, C Fang… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Machine learning service APIs allow model owners to monetize proprietary models by
offering prediction services to third-party users. However, existing literature shows that …

Adap CDP-ML: Concentrated Differentially Private machine learning with Adaptive Noise

J Fu, H Cui, S Zhang, X Su - 2023 IEEE 11th Joint International …, 2023 - ieeexplore.ieee.org
Machine learning and big data, foundational pillars of artificial intelligence, are propelling
societal development and garnering significant attention. Recent studies reveal that …

Protecting regression models with personalized local differential privacy

X Li, H Yan, Z Cheng, W Sun… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The equation-solving model extraction attack is an intuitively simple yet devastating attack that
steals confidential information from regression models through a sufficient number of queries …

Differentially private machine learning model against model extraction attack

Z Cheng, Z Li, J Zhang, S Zhang - … International Conferences on …, 2020 - ieeexplore.ieee.org
Machine learning models are vulnerable to model extraction attacks, since attackers can
send numerous queries to infer the hyperparameters of the machine learning model and thus …

Attack-Aware Noise Calibration for Differential Privacy

B Kulynych, JF Gomez, G Kaissis, FP Calmon… - arXiv preprint arXiv …, 2024 - arxiv.org
Differential privacy (DP) is a widely used approach for mitigating privacy risks when training
machine learning models on sensitive data. DP mechanisms add noise during training to …

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Y Bai, Y Li, M Xie, M Fan - Security and Communication …, 2021 - Wiley Online Library
In recent years, machine learning approaches have been widely adopted in many
applications, including classification. Machine learning models deal with collective sensitive …

Differentially private data generative models

Q Chen, C Xiang, M Xue, B Li, N Borisov… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep neural networks (DNNs) have recently been widely adopted in various applications,
and such success is largely due to a combination of algorithmic breakthroughs, computation …