Machine learning models trained on large volumes of proprietary data with intensive computational resources are valuable assets to their owners, who commercialize these …
Machine learning service APIs allow model owners to monetize proprietary models by offering prediction services to third-party users. However, the existing literature shows that …
The equation-solving model extraction attack is an intuitively simple yet devastating attack that steals the confidential information of regression models through a sufficient number of queries …
D Ye, S Shen, T Zhu, B Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks. In these types of breaches, an adversary attempts to …
In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on obtained …
C Park, D Hong, C Seo - IEEE Access, 2019 - ieeexplore.ieee.org
As the amount of data and computational power grow explosively, valuable results are being produced using machine learning techniques. In particular, models based on deep …
Y Yin, K Chen, L Shou, G Chen - Proceedings of the 27th ACM SIGKDD …, 2021 - dl.acm.org
Membership Inference Attack (MIA) in deep learning is a common form of privacy attack which aims to infer whether a data sample is in a target classifier's training dataset or not …
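A toy version of the classic confidence-threshold MIA, under the illustrative assumption that an overfit classifier reports higher confidence on its training members than on unseen samples; the Beta-distributed confidences are synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic top-class confidences (assumption): members skew high
# because the model has memorized them; non-members sit lower.
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(4, 4, size=1000)

def infer_membership(conf, tau=0.7):
    """Threshold MIA: claim 'member' whenever confidence exceeds tau."""
    return conf > tau

tp = infer_membership(member_conf).mean()     # true-positive rate
fp = infer_membership(nonmember_conf).mean()  # false-positive rate
advantage = float(tp - fp)                    # attack advantage
```

The gap `tp - fp` is the standard membership advantage; an attacker with no signal gets an advantage near zero, while overfitting widens it.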
Data holders increasingly seek to protect their users' privacy while still maximizing their ability to produce machine learning (ML) models with high-quality predictions. In this …
Although query-based systems (QBS) have become one of the main solutions to share data anonymously, building QBSes that robustly protect the privacy of individuals contributing to …