SV Dibbo, DL Chung, S Mehnaz - 2023 IEEE Conference on …, 2023 - ieeexplore.ieee.org
In this paper, we study model inversion attribute inference (MIAI), a machine learning (ML) privacy attack that aims to infer sensitive information about the training data given access to …
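As a rough illustration of the attack idea in this snippet, the sketch below enumerates candidate values of a sensitive attribute and keeps the one that makes a black-box victim model most confident in the record's known label (a Fredrikson-style strategy). Everything here, including `query_model` and the attribute domain, is a hypothetical stand-in, not code from the cited paper.

```python
# Minimal sketch of a model inversion attribute inference (MIAI) attack.
# `query_model` and the attribute layout are hypothetical placeholders.
import numpy as np

def query_model(features):
    """Stand-in for black-box access to the victim model's confidence scores."""
    rng = np.random.default_rng(abs(hash(tuple(features))) % (2**32))
    p = rng.random()
    return np.array([p, 1.0 - p])  # fake class-probability vector

def infer_sensitive_attribute(known_features, known_label, candidate_values, attr_index):
    """Try each candidate value for the sensitive attribute and keep the one
    that makes the victim model most confident in the record's known label."""
    best_value, best_conf = None, -1.0
    for v in candidate_values:
        trial = list(known_features)
        trial[attr_index] = v
        conf = query_model(trial)[known_label]
        if conf > best_conf:
            best_value, best_conf = v, conf
    return best_value

# Example: recover a binary attribute at position 2 of a partially known record.
guess = infer_sensitive_attribute([0.3, 1.0, None, 0.7], known_label=1,
                                  candidate_values=[0, 1], attr_index=2)
print("inferred attribute:", guess)
```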
Many of today's machine learning (ML) systems are built by reusing an array of primitive models, often pre-trained, each fulfilling a distinct functionality (e.g., feature extraction). The …
Y Chen, R Guan, X Gong, J Dong… - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Recent studies show that machine learning models are vulnerable to model extraction attacks, where the adversary builds a substitute model that achieves almost the same …
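A minimal, self-contained sketch of the substitute-model idea this snippet describes: the attacker labels surrogate queries through the victim's prediction interface and trains a local model to match. The victim here is trained locally purely so the example runs; in a real attack it would be a confidential remote model.

```python
# Minimal sketch of a query-based model extraction attack, assuming only
# black-box label access to a victim classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])  # "confidential" model

# Attacker: draw surrogate queries, label them via the victim, train a substitute.
queries = np.random.default_rng(1).normal(size=(1000, 10))
stolen_labels = victim.predict(queries)           # the only access the attacker needs
substitute = DecisionTreeClassifier().fit(queries, stolen_labels)

# Fidelity: how often the substitute agrees with the victim on held-out inputs.
agreement = (substitute.predict(X[1000:]) == victim.predict(X[1000:])).mean()
print(f"substitute/victim agreement: {agreement:.2%}")
```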
Building advanced machine learning (ML) models requires expert knowledge and many trials to discover the best architecture and hyperparameter settings. Previous work …
Emerging vulnerabilities in machine learning (ML) models due to adversarial attacks raise concerns about their reliability. Specifically, evasion attacks manipulate models by …
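As a concrete example of an evasion attack, the sketch below applies the classic fast gradient sign method (FGSM) to a toy logistic-regression victim: a single bounded step in the direction of the loss gradient's sign is enough to flip the toy model's prediction. Weights and data are made-up stand-ins.

```python
# Minimal sketch of an evasion (adversarial example) attack via FGSM
# on a plain logistic-regression model with toy, hand-picked parameters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([1.5, -2.0, 0.5]), 0.1        # fixed victim model parameters
x, y = np.array([0.2, -0.4, 1.0]), 1.0        # clean input with true label 1

# Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: step of size epsilon in the direction of the gradient's sign.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))      # ~0.85, class 1
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.43, class flipped
```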
Model extraction (ME) attacks represent one major threat to Machine-Learning-as-a-Service (MLaaS) platforms by "stealing" the functionality of confidential machine-learning models …
Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model. This …
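To make the surrogate-data assumption concrete, this sketch compares extraction fidelity when the attacker's queries resemble the victim's training distribution versus when they are uninformed noise; it is an illustrative toy, not the cited paper's method.

```python
# Minimal sketch of the surrogate-data assumption in model extraction:
# label two query sets via the victim and compare substitute fidelity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
test = X[1500:]

def extract(queries):
    """Label attacker queries via the victim and fit a substitute model;
    return agreement with the victim on held-out inputs."""
    sub = DecisionTreeClassifier(random_state=0).fit(queries, victim.predict(queries))
    return (sub.predict(test) == victim.predict(test)).mean()

rng = np.random.default_rng(1)
similar = X[:1500] + 0.1 * rng.normal(size=(1500, 8))   # surrogate ~ training data
noise = rng.uniform(-10, 10, size=(1500, 8))            # no data knowledge at all

print(f"fidelity with similar surrogate data: {extract(similar):.2%}")
print(f"fidelity with pure noise queries:     {extract(noise):.2%}")
```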
Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information about training data, model parameters, etc. While researchers have …
As a promising service, Machine Learning as a Service (MLaaS) provides personalized inference functions for clients through paid APIs. Nevertheless, it is vulnerable to model …