A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun… - Computer Science …, 2020 - Elsevier
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …

SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE Symposium on Security and Privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Safety concerns and mitigation approaches regarding the use of deep learning in safety-critical perception tasks

O Willers, S Sudholt, S Raafatnia, S Abrecht - Computer Safety, Reliability …, 2020 - Springer
Deep learning methods are widely regarded as indispensable when it comes to designing
perception pipelines for autonomous agents such as robots, drones or automated vehicles …

Safety verification of deep neural networks

X Huang, M Kwiatkowska, S Wang, M Wu - Computer Aided Verification …, 2017 - Springer
Deep neural networks have achieved impressive experimental results in image
classification, but can surprisingly be unstable with respect to adversarial perturbations, that …

Adversarial robustness of deep neural networks: A survey from a formal verification perspective

MH Meng, G Bai, SG Teo, Z Hou, Y Xiao… - … on Dependable and …, 2022 - ieeexplore.ieee.org
Neural networks have been widely applied in security applications such as spam and
phishing detection, intrusion prevention, and malware detection. This black-box method …

Scalable quantitative verification for deep neural networks

T Baluta, ZL Chua, KS Meel… - 2021 IEEE/ACM 43rd …, 2021 - ieeexplore.ieee.org
Despite the functional success of deep neural networks, their trustworthiness remains a
crucial open challenge. To address this challenge, both testing and verification techniques …

EMPIR: Ensembles of mixed precision deep networks for increased robustness against adversarial attacks

S Sen, B Ravindran, A Raghunathan - arXiv preprint arXiv:2004.10162, 2020 - arxiv.org
Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-
critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are …

Attribution-based confidence metric for deep neural networks

S Jha, S Raj, S Fernandes, SK Jha… - Advances in …, 2019 - proceedings.neurips.cc
We propose a novel confidence metric, namely, attribution-based confidence (ABC) for deep
neural networks (DNNs). The ABC metric characterizes whether the output of a DNN on an input …
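
For intuition, a minimal sketch of an attribution-guided confidence estimate, loosely in the spirit of ABC and not the authors' exact construction: a toy linear model stands in for a trained DNN, attributions are input-times-gradient, and the score is the fraction of attribution-guided neighbourhood samples whose predicted label matches the original. The inverse-attribution noise scaling, sample count, and model are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained classifier: a linear softmax model over 4 features
# (hypothetical weights; the paper targets trained DNNs with richer attribution methods).
rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)

def predict(x):
    """Predicted class label for a single input vector x."""
    return int(np.argmax(W @ x + b))

def attribution(x, label):
    """Input-times-gradient attribution; for a linear model the gradient of a
    class logit is simply its weight row."""
    return x * W[label]

def abc_style_confidence(x, n_samples=500, noise_scale=0.2):
    """Attribution-guided confidence: perturb features in inverse proportion to
    their attribution magnitude (an assumption made here for illustration) and
    report how often the predicted label is preserved."""
    label = predict(x)
    attr = np.abs(attribution(x, label))
    scales = noise_scale / (1.0 + attr / (attr.mean() + 1e-12))
    kept = sum(predict(x + rng.normal(scale=scales, size=x.shape)) == label
               for _ in range(n_samples))
    return kept / n_samples

x = rng.normal(size=4)
print("prediction:", predict(x), " confidence:", abc_style_confidence(x))
```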

Mitigating evasion attacks to deep neural networks via region-based classification

X Cao, NZ Gong - Proceedings of the 33rd Annual Computer Security …, 2017 - dl.acm.org
Deep neural networks (DNNs) have transformed several artificial intelligence research
areas including computer vision, speech recognition, and natural language processing …
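
For intuition, a minimal sketch of region-based classification as the title describes it: classify many points sampled uniformly from a small hypercube centred at the test input and return the majority-vote label rather than the point prediction. The toy random network, radius, and sample count are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Toy stand-in for a trained classifier: a random two-layer ReLU network
# (hypothetical; the paper applies the idea to trained image classifiers).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def predict(x):
    """Predicted class label for a single input vector x."""
    h = np.maximum(W1 @ x + b1, 0.0)      # ReLU hidden layer
    return int(np.argmax(W2 @ h + b2))    # class with the highest logit

def region_based_predict(x, radius=0.1, n_samples=1000, n_classes=3):
    """Sample points uniformly from the hypercube of the given radius around x
    and return the majority-vote label over the sampled predictions."""
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        votes[predict(x + rng.uniform(-radius, radius, size=x.shape))] += 1
    return int(np.argmax(votes))

x = rng.normal(size=4)
print("point prediction :", predict(x))
print("region prediction:", region_based_predict(x))
```

The intuition is that adversarial examples tend to sit close to the decision boundary, so a label aggregated over a region around the input is harder to flip with a small perturbation than the single point prediction.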

DeepDyve: Dynamic verification for deep neural networks

Y Li, M Li, B Luo, Y Tian, Q Xu - Proceedings of the 2020 ACM SIGSAC …, 2020 - dl.acm.org
Deep neural networks (DNNs) have become one of the enabling technologies in many
safety-critical applications, e.g., autonomous driving and medical image analysis. DNN …