A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

X Huang, D Kroening, W Ruan, J Sharp, Y Sun… - Computer Science …, 2020 - Elsevier
In the past few years, significant progress has been made on deep neural networks (DNNs)
in achieving human-level performance on several long-standing tasks. With the broader …

Algorithms for verifying deep neural networks

C Liu, T Arnon, C Lazarus, C Strong… - … and Trends® in …, 2021 - nowpublishers.com
Deep neural networks are widely used for nonlinear function approximation, with
applications ranging from computer vision to control. Although these networks involve the …

The fallacy of AI functionality

ID Raji, IE Kumar, A Horowitz, A Selbst - … of the 2022 ACM Conference on …, 2022 - dl.acm.org
Deployed AI systems often do not work. They can be constructed haphazardly, deployed
indiscriminately, and promoted deceptively. However, despite this reality, scholars, the …

Overfitting in adversarially robust deep learning

L Rice, E Wong, Z Kolter - International conference on …, 2020 - proceedings.mlr.press
It is common practice in deep learning to use overparameterized networks and train for as
long as possible; there are numerous studies that show, both theoretically and empirically …

Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification

S Wang, H Zhang, K Xu, X Lin, S Jana… - Advances in …, 2021 - proceedings.neurips.cc
Bound propagation based incomplete neural network verifiers such as CROWN are very
efficient and can significantly accelerate branch-and-bound (BaB) based complete …

Certified adversarial robustness via randomized smoothing

J Cohen, E Rosenfeld, Z Kolter - International conference on …, 2019 - proceedings.mlr.press
We show how to turn any classifier that classifies well under Gaussian noise into a new
classifier that is certifiably robust to adversarial perturbations under the L2 norm. While this …
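
The smoothing construction this entry describes can be sketched in a few lines: classify many Gaussian-noised copies of the input, take the majority class, and convert the class-probability gap into a certified L2 radius via the Gaussian inverse CDF, R = (σ/2)(Φ⁻¹(pA) − Φ⁻¹(pB)). This is a minimal illustrative sketch, not the authors' implementation; `base_classifier`, the sample count, and the probability clamping are hypothetical choices (a faithful version would use confidence intervals rather than raw Monte Carlo estimates).

```python
import random
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical stand-in for a trained classifier:
    # a simple threshold on the sum of the features.
    return 1 if sum(x) > 0 else 0

def smoothed_predict(x, sigma=0.25, n=2000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c], together with
    the certified L2 radius (sigma/2)(Phi^-1(pA) - Phi^-1(pB))."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        c = base_classifier(noisy)
        counts[c] = counts.get(c, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    top_class, top_count = ranked[0]
    runner_count = ranked[1][1] if len(ranked) > 1 else 0
    # Clamp empirical probabilities away from 0 and 1 so inv_cdf is finite;
    # the paper instead lower-bounds pA with a binomial confidence interval.
    pa = min(max(top_count / n, 1e-6), 1 - 1e-6)
    pb = min(max(runner_count / n, 1e-6), 1 - 1e-6)
    nd = NormalDist()
    radius = (sigma / 2) * (nd.inv_cdf(pa) - nd.inv_cdf(pb))
    return top_class, radius

pred, radius = smoothed_predict([1.0, 1.0])
```

For this toy input the noised classifier agrees on essentially every sample, so the estimated gap is large and the certified radius is strictly positive.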

The Marabou framework for verification and analysis of deep neural networks

G Katz, DA Huang, D Ibeling, K Julian… - … Aided Verification: 31st …, 2019 - Springer
Deep neural networks are revolutionizing the way complex systems are designed.
Consequently, there is a pressing need for tools and techniques for network analysis and …

Provably robust deep learning via adversarially trained smoothed classifiers

H Salman, J Li, I Razenshteyn… - Advances in neural …, 2019 - proceedings.neurips.cc
Recent works have shown the effectiveness of randomized smoothing as a scalable
technique for building neural network-based classifiers that are provably robust to $\ell_2 …

An abstract domain for certifying neural networks

G Singh, T Gehr, M Püschel, M Vechev - Proceedings of the ACM on …, 2019 - dl.acm.org
We present a novel method for scalable and precise certification of deep neural networks.
The key technical insight behind our approach is a new abstract domain which combines …

Efficient neural network robustness certification with general activation functions

H Zhang, TW Weng, PY Chen… - Advances in neural …, 2018 - proceedings.neurips.cc
Finding minimum distortion of adversarial examples and thus certifying robustness in neural
network classifiers is known to be a challenging problem. Nevertheless, recently it has …
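
The bound-propagation family of certifiers this entry belongs to can be illustrated with its simplest member, interval bound propagation: push an L∞ box around the input through each layer, tracking elementwise lower and upper bounds, and check that the output bound certifies the property. This is an illustrative sketch of interval propagation only, not CROWN's tighter linear relaxation; the tiny network weights and the "output > 0" property are hypothetical.

```python
def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b with interval
    arithmetic: a positive weight takes the input's lower bound for the
    output's lower bound, a negative weight takes the upper bound."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Hypothetical 2-2-1 ReLU network and an L-infinity input region of radius 0.1.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0]], [0.0]
x, eps = [1.0, 0.0], 0.1
lo, hi = [xi - eps for xi in x], [xi + eps for xi in x]
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
# If the output's lower bound stays positive over the whole input region,
# the property "output > 0" is certified for every point in it.
certified = lo[0] > 0.0
```

CROWN-style methods replace the interval bounds with linear lower and upper bounds on each activation, which is what makes them markedly tighter on deeper networks; the propagation skeleton is the same.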