Research progress and challenges on application-driven adversarial examples: A survey

W Jiang, Z He, J Zhan, W Pan, D Adhikari - ACM Transactions on Cyber …, 2021 - dl.acm.org
Great progress has been made in deep learning over the past few years, which drives the
deployment of deep learning–based applications into cyber-physical systems. But the lack of …

VoltJockey: A new dynamic voltage scaling-based fault injection attack on Intel SGX

P Qiu, D Wang, Y Lyu, R Tian… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Intel Software Guard Extensions (SGX) increases the security of applications by enabling them
to be executed in a highly trusted space (called an enclave). Most state-of-the-art attacks on …

Resilience-aware MLOps for AI-based medical diagnostic system

V Moskalenko, V Kharchenko - Frontiers in Public Health, 2024 - frontiersin.org
Background The healthcare sector demands a higher degree of responsibility,
trustworthiness, and accountability when implementing Artificial Intelligence (AI) systems …

Defending against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit-plane Slicing

Y Liu, J Dong, P Zhou - ACM Journal on Emerging Technologies in …, 2022 - dl.acm.org
Deep Neural Networks (DNNs) have been widely used in a variety of fields with great success.
However, recent research indicates that DNNs are susceptible to adversarial attacks, which …

Prediction Stability: A New Metric for Quantitatively Evaluating DNN Outputs

Q Guo, J Ye, J Zhang, Y Hu, X Li, H Li - … of the 2020 on Great Lakes …, 2020 - dl.acm.org
In many realistic applications, the inputs collected for DNNs face a major challenge:
perturbations. Although these perturbations are imperceptible, they may cause incorrect …