Deep learning-based anomaly detection in cyber-physical systems: Progress and opportunities

Y Luo, Y Xiao, L Cheng, G Peng, D Yao - ACM Computing Surveys …, 2021 - dl.acm.org
Anomaly detection is crucial to ensure the security of cyber-physical systems (CPS).
However, due to the increasing complexity of CPSs and more sophisticated attacks …

A review of explainable deep learning cancer detection models in medical imaging

MA Gulum, CM Trombley, M Kantardzic - Applied Sciences, 2021 - mdpi.com
Deep learning has demonstrated remarkable accuracy analyzing images for cancer
detection tasks in recent years. The accuracy that has been achieved rivals radiologists and …

Trustworthy AI: A computational perspective

H Liu, Y Wang, W Fan, X Liu, Y Li, S Jain, Y Liu… - ACM Transactions on …, 2022 - dl.acm.org
In the past few decades, artificial intelligence (AI) technology has experienced swift
developments, changing everyone's daily life and profoundly altering the course of human …

Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks

Y Cao, N Wang, C Xiao, D Yang, J Fang… - … IEEE symposium on …, 2021 - ieeexplore.ieee.org
In Autonomous Driving (AD) systems, perception is both security and safety critical. Despite
various prior studies on its security issues, all of them only consider attacks on camera- or …

On the (in)fidelity and sensitivity of explanations

CK Yeh, CY Hsieh, A Suggala… - Advances in neural …, 2019 - proceedings.neurips.cc
We consider objective evaluation measures of saliency explanations for complex black-box
machine learning models. We propose simple robust variants of two notions that have been …

Februus: Input purification defense against trojan attacks on deep neural network systems

BG Doan, E Abbasnejad, DC Ranasinghe - Proceedings of the 36th …, 2020 - dl.acm.org
We propose Februus, a new idea to neutralize highly potent and insidious Trojan attacks on
Deep Neural Network (DNN) systems at run-time. In Trojan attacks, an adversary activates a …

“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

CADE: Detecting and explaining concept drift samples for security applications

L Yang, W Guo, Q Hao, A Ciptadi… - 30th USENIX Security …, 2021 - usenix.org
Concept drift poses a critical challenge to deploy machine learning models to solve practical
security problems. Due to the dynamic behavior changes of attackers (and/or the benign …

A survey of data-driven and knowledge-aware explainable AI

XH Li, CC Cao, Y Shi, W Bai, H Gao… - … on Knowledge and …, 2020 - ieeexplore.ieee.org
We are witnessing a fast development of Artificial Intelligence (AI), but it has become
dramatically challenging to explain AI models in the past decade. “Explanation” has a flexible …

Does physical adversarial example really matter to autonomous driving? Towards system-level effect of adversarial object evasion attack

N Wang, Y Luo, T Sato, K Xu… - Proceedings of the …, 2023 - openaccess.thecvf.com
In autonomous driving (AD), accurate perception is indispensable to achieving safe and
secure driving. Due to its safety-criticality, the security of AD perception has been widely …