Counterfactuals provide guidance on how to achieve a favorable outcome from a model with minimal input perturbation. However, counterfactuals can also be exploited to leak …
Model extraction attacks are designed to steal trained models given only query access, such as that provided through the APIs of ML-as-a-Service platforms. ML models are expensive …
P Karmakar, D Basu - arXiv preprint arXiv:2302.08466, 2023 - arxiv.org
We study the design of black-box model extraction attacks that send a minimal number of queries from a publicly available dataset to a target ML model through a predictive API with …
Cloud service providers, including Google, Amazon, and Alibaba, have now launched machine-learning-as-a-service (MLaaS) platforms, allowing clients to access sophisticated …
Y Chen, R Guan, X Gong, J Dong… - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Recent studies show that machine learning models are vulnerable to model extraction attacks, where the adversary builds a substitute model that achieves almost the same …
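The extraction loop this snippet describes (query the target's prediction API on public inputs, collect the returned labels, and fit a substitute that agrees with the target) can be sketched minimally. The linear target model, the perceptron substitute, and the agreement metric below are illustrative assumptions for the sketch, not the method of any particular cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: a fixed linear classifier that the
# attacker can only query for labels (simulating a prediction API).
w_target = np.array([2.0, -1.0])

def query_api(X):
    return (X @ w_target > 0).astype(int)

# The attacker draws unlabeled inputs from a public distribution and
# labels them by querying the API.
X_pub = rng.normal(size=(500, 2))
y = query_api(X_pub)

# Train a substitute on the stolen labels with a simple perceptron.
w_sub = np.zeros(2)
for _ in range(20):
    for x, t in zip(X_pub, y):
        pred = int(x @ w_sub > 0)
        w_sub += (t - pred) * x

# Measure how often the substitute matches the target on fresh inputs.
X_test = rng.normal(size=(1000, 2))
agreement = np.mean(query_api(X_test) == (X_test @ w_sub > 0).astype(int))
```

With enough labeled queries the substitute's decision boundary closely tracks the target's, which is the "almost the same" behavior the snippet refers to.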
The outstanding performance of deep learning has prompted the rise of Machine Learning as a Service (MLaaS), which significantly reduces the difficulty for users to train and deploy …
Machine learning models are increasingly being offered as a service by big companies such as Google, Microsoft and Amazon. They use Machine Learning as a …
Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been …
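One common defensive idea along the lines this snippet alludes to is to coarsen or perturb the API's outputs so that each query leaks less information while the top-1 label an honest client needs stays intact. The rounding granularity and noise scale below are illustrative assumptions for the sketch, not a specific proposed defense.

```python
import numpy as np

rng = np.random.default_rng(1)

def defended_predict(probs, noise_scale=0.02):
    """Return class scores with small noise added, renormalized, and
    rounded to one decimal place, limiting per-query information leakage."""
    noisy = probs + rng.normal(scale=noise_scale, size=probs.shape)
    noisy = np.clip(noisy, 0.0, None)
    noisy = noisy / noisy.sum(axis=-1, keepdims=True)
    return np.round(noisy, 1)

probs = np.array([[0.72, 0.20, 0.08]])
out = defended_predict(probs)
# For confidently classified inputs the top-1 class is preserved,
# so honest clients are largely unaffected.
```

The trade-off is the usual one for such defenses: stronger perturbation slows extraction but also degrades utility for legitimate users.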
As a promising service, Machine Learning as a Service (MLaaS) provides personalized inference functions for clients through paid APIs. Nevertheless, it is vulnerable to model …