X Xian, M Hong, J Ding - arXiv preprint arXiv:2206.11480, 2022 - arxiv.org
The privacy of machine learning models has become a significant concern in many emerging Machine-Learning-as-a-Service applications, where prediction services based on …
X Zhang, C Fang, J Shi - arXiv preprint arXiv:2104.05921, 2021 - arxiv.org
Model extraction increasingly attracts research attention, since keeping commercial AI models private preserves a competitive advantage. In some scenarios, AI models are trained …
Machine learning (ML) and deep learning methods have become common and publicly available, while ML security to date struggles to keep pace with emerging threats. One rising threat is …
Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been …
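The label-only stealing attack described in this snippet can be illustrated with a minimal sketch. The victim classifier, query budget, and surrogate architecture below are all illustrative assumptions, not taken from the cited work: a secret linear classifier answers hard-label queries, and the adversary fits a logistic-regression surrogate to the query/label pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim: a secret linear classifier exposed only as a label API.
w_secret = np.array([2.0, -1.0])

def victim_label(x):
    """Prediction API: the adversary sees hard labels only, never w_secret."""
    return int(x @ w_secret > 0)

# Attack: query the API on chosen inputs and record the returned labels.
queries = rng.normal(size=(2000, 2))
labels = np.array([victim_label(x) for x in queries])

# Surrogate: logistic regression fit to the stolen labels by gradient descent.
w_sur = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-queries @ w_sur))
    w_sur -= 0.1 * queries.T @ (p - labels) / len(labels)

# Measure how often the surrogate agrees with the victim on fresh inputs.
test = rng.normal(size=(500, 2))
agreement = np.mean([victim_label(x) == int(x @ w_sur > 0) for x in test])
```

With a linearly separable victim, the surrogate's decision boundary converges toward the victim's, so agreement on fresh inputs is high even though the adversary never saw a confidence score.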
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business …
Y Kilcher, T Hofmann - arXiv preprint arXiv:1711.05475, 2017 - arxiv.org
Black-Box attacks on machine learning models occur when an attacker, despite having no access to the inner workings of a model, can successfully craft an attack by means of model …
Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model. This …
S Zhou, T Zhu, D Ye, X Yu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Model inversion attacks involve reconstructing the training data of a target model, which raises serious privacy concerns for machine learning models. However, these attacks …
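The reconstruction step behind a model inversion attack can be sketched in a few lines. The target model, penalty weight, and step size here are illustrative assumptions: given a model that returns a class confidence, the attacker runs gradient ascent on the *input* to maximize that confidence, recovering a representative input for the class.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target: a logistic model whose weight vector points toward
# the mean of the sensitive class (as in a Gaussian class-conditional setup).
w = rng.normal(size=10)
w /= np.linalg.norm(w)

def confidence(x):
    """Target model's confidence for the sensitive class."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# Inversion: gradient ascent on the input to maximize class confidence,
# with an L2 prior to keep the reconstruction bounded.
x = rng.normal(size=10) * 0.01
lr, lam = 0.5, 0.1
for _ in range(300):
    p = confidence(x)
    x += lr * (p * (1 - p) * w - lam * x)  # grad of sigmoid(w.x) - lam*|x|^2/2

# The reconstruction aligns with the class-defining direction w.
cosine = x @ w / (np.linalg.norm(x) * np.linalg.norm(w))
```

At the fixed point the L2 penalty balances the confidence gradient, so the recovered input is parallel to `w`, i.e. it reveals the direction that characterizes the class in the training data.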
JB Truong - Ph.D. dissertation, 2021 - digital.wpi.edu
Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model. This …
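The surrogate-dataset assumption this snippet challenges can be made concrete with a toy data-free extraction, under illustrative assumptions (a linear victim model and Gaussian-noise queries, not the cited work's method): the adversary queries the victim on pure random noise and fits a student to the query/output pairs, needing no data resembling the proprietary training set.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical victim: a proprietary linear model the adversary can only query.
W_victim = rng.normal(size=(3, 5))

def victim_predict(X):
    return X @ W_victim.T

# Data-free attack: no surrogate dataset -- queries are pure Gaussian noise.
X_noise = rng.normal(size=(200, 5))
Y = victim_predict(X_noise)

# Fit a student by least squares on the (noise, output) pairs.
W_student, *_ = np.linalg.lstsq(X_noise, Y, rcond=None)
W_student = W_student.T

max_err = np.abs(W_student - W_victim).max()
```

For a linear victim, random queries span the input space, so the student recovers the victim's weights essentially exactly; richer victims need adaptive query synthesis, which is where the data-free extraction literature above comes in.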