Attacks on machine learning systems: common problems and methods

ДЕ Намиот, ЕА Ильюшин, ИВ Чижов - International Journal of …, 2022 - cyberleninka.ru
This paper examines the problem of attacks on machine learning systems. Such
attacks are understood as targeted manipulations of elements of the pipeline …

Protecting deep neural network intellectual property with architecture-agnostic input obfuscation

B Olney, R Karam - Proceedings of the Great Lakes Symposium on VLSI …, 2022 - dl.acm.org
Deep Convolutional Neural Networks (DCNNs) have revolutionized and improved many
aspects of modern life. However, these models are increasingly more complex, and training …

Evaluating Efficacy of Model Stealing Attacks and Defenses on Quantum Neural Networks

S Kundu, D Kundu, S Ghosh - … of the Great Lakes Symposium on VLSI …, 2024 - dl.acm.org
Cloud hosting of quantum machine learning (QML) models exposes them to a range of
vulnerabilities, the most significant of which is the model stealing attack. In this study, we …

Digital Privacy Under Attack: Challenges and Enablers

B Song, M Deng, SR Pokhrel, Q Lan, R Doss… - arXiv preprint arXiv …, 2023 - arxiv.org
Users have renewed interest in protecting their private data in the digital space. When they
don't believe that their privacy is sufficiently covered by one platform, they will readily switch …

DeFedHDP: Fully Decentralized Online Federated Learning for Heart Disease Prediction in Computational Health Systems

M Wei, J Yang, Z Zhao, X Zhang, J Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Heart disease is a leading global cause of death, and federated learning (FL) is an
effective way to predict it. Due to patient privacy concerns and the centralized nature of …

Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey

L Demelius, R Kern, A Trügler - arXiv preprint arXiv:2309.16398, 2023 - arxiv.org
Differential Privacy has become a widely popular method for data protection in machine
learning, especially since it allows formulating strict mathematical privacy guarantees. This …

Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation

V Hondru, RT Ionescu - arXiv preprint arXiv:2310.00096, 2023 - arxiv.org
Diffusion models showcased strong capabilities in image synthesis, being used in many
computer vision tasks with great success. To this end, we propose to explore a new use …

Poisoning-Free Defense Against Black-Box Model Extraction

H Zhang, G Hua, W Yang - ICASSP 2024-2024 IEEE …, 2024 - ieeexplore.ieee.org
Recent research has shown that an adversary can use a surrogate model to steal the
functionality of a target deep learning model even under the black-box condition and without …

Securing Artificial Intelligence: Exploring Attack Scenarios and Defense Strategies

İZ Altun, AE Özkök - … on Digital Forensics and Security (ISDFS), 2024 - ieeexplore.ieee.org
In today's landscape, the widespread integration of artificial intelligence (AI) solutions across
diverse domains has become commonplace. Yet, despite its omnipresence, AI applications …

Protecting Bilateral Privacy in Machine Learning-as-a-Service: A Differential Privacy Based Defense

L Wang, H Yan, X Lin, P Xiong - International Conference on Artificial …, 2023 - Springer
With the continuous promotion and deepened application of Machine Learning-as-a-Service
(MLaaS) across various societal domains, its privacy problems occur frequently and receive …