Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks

T Du, S Ji, L Shen, Y Zhang, J Li, J Shi, C Fang, J Yin… - CCS, 2021 - nesa.zju.edu.cn
Certifiable robustness, the ability to verify whether a given region surrounding a
data point admits any adversarial example, provides guaranteed security for neural …

Towards certifying the asymmetric robustness for neural networks: Quantification and applications

C Li, S Ji, H Weng, B Li, J Shi, R Beyah… - … on Dependable and …, 2021 - ieeexplore.ieee.org
One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial
examples: those maliciously crafted inputs that deceive target DNNs. While a plethora of …

Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks

Y Tsuzuku, I Sato, M Sugiyama - Advances in neural …, 2018 - proceedings.neurips.cc
The high sensitivity of neural networks to malicious input perturbations raises security
concerns. To take a steady step towards robust classifiers, we aim to create neural network …

Fooling a complete neural network verifier

D Zombori, B Bánhelyi, T Csendes, M Jelasity - 2021 - real.mtak.hu
The efficient and accurate characterization of the robustness of neural networks to input
perturbations is an important open problem. Many approaches exist, including heuristic and …

Adversarial robustness of deep neural networks: A survey from a formal verification perspective

MH Meng, G Bai, SG Teo, Z Hou, Y Xiao… - … on Dependable and …, 2022 - ieeexplore.ieee.org
Neural networks have been widely applied in security applications such as spam and
phishing detection, intrusion prevention, and malware detection. This black-box method …

Scalable quantitative verification for deep neural networks

T Baluta, ZL Chua, KS Meel… - 2021 IEEE/ACM 43rd …, 2021 - ieeexplore.ieee.org
Despite the functional success of deep neural networks, their trustworthiness remains a
crucial open challenge. To address this challenge, both testing and verification techniques …

CNN-Cert: An efficient framework for certifying robustness of convolutional neural networks

A Boopathy, TW Weng, PY Chen, S Liu… - Proceedings of the AAAI …, 2019 - ojs.aaai.org
Verifying the robustness of neural network classifiers has attracted great interest and attention
due to the success of deep neural networks and their unexpected vulnerability to adversarial …

Global robustness evaluation of deep neural networks with provable guarantees for the Hamming distance

W Ruan, M Wu, Y Sun, X Huang, D Kroening… - IJCAI-19, 2019 - ora.ox.ac.uk
Deployment of deep neural networks (DNNs) in safety-critical systems requires provable
guarantees for their correct behaviours. We compute the maximal radius of a safe norm ball …

Double bubble, toil and trouble: enhancing certified robustness through transitivity

A Cullen, P Montague, S Liu, S Erfani… - Advances in Neural …, 2022 - proceedings.neurips.cc
In response to subtle adversarial examples flipping classifications of neural network models,
recent research has promoted certified robustness as a solution. There, invariance of …

Debona: Decoupled boundary network analysis for tighter bounds and faster adversarial robustness proofs

C Brix, T Noll - arXiv preprint arXiv:2006.09040, 2020 - arxiv.org
Neural networks are commonly used in safety-critical real-world applications. Unfortunately,
the predicted output is often highly sensitive to small, and possibly imperceptible, changes to …