Adversarial Robustness Toolbox v1.0.0

MI Nicolae, M Sinn, MN Tran, B Buesser… - arXiv preprint arXiv …, 2018 - arxiv.org
Adversarial Robustness Toolbox (ART) is a Python library supporting developers and
researchers in defending Machine Learning models (Deep Neural Networks, Gradient …

DeepRobust: a platform for adversarial attacks and defenses

Y Li, W Jin, H Xu, J Tang - Proceedings of the AAAI Conference on …, 2021 - ojs.aaai.org
DeepRobust is a PyTorch platform for generating adversarial examples and building robust
machine learning models for different data domains. Users can easily evaluate the attack …

Indicators of attack failure: Debugging and improving optimization of adversarial examples

M Pintor, L Demetrio, A Sotgiu… - Advances in …, 2022 - proceedings.neurips.cc
Evaluating robustness of machine-learning models to adversarial examples is a challenging
problem. Many defenses have been shown to provide a false sense of robustness by …

Opportunities and challenges in deep learning adversarial robustness: A survey

SH Silva, P Najafirad - arXiv preprint arXiv:2007.00753, 2020 - arxiv.org
As we seek to deploy machine learning models beyond virtual and controlled domains, it is
critical to analyze not only the accuracy or the fact that it works most of the time, but whether such a …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Foolbox: A python toolbox to benchmark the robustness of machine learning models

J Rauber, W Brendel, M Bethge - arXiv preprint arXiv:1707.04131, 2017 - arxiv.org
Even today's most advanced machine learning models are easily fooled by almost
imperceptible perturbations of their inputs. Foolbox is a new Python package to generate …
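The "almost imperceptible perturbations" these toolboxes automate can be illustrated with the classic one-step Fast Gradient Sign Method (FGSM). The sketch below is not Foolbox's API; it is a minimal pure-NumPy version on a toy linear logistic classifier, whose input gradient has the closed form (sigmoid(w·x) − y)·w.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One-step L-infinity FGSM: move x a distance eps along the sign
    of the loss gradient, then clip back to the valid pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear classifier: logit = w . x, label y in {0, 1}.
# For logistic loss, the gradient w.r.t. the input is (sigmoid(w.x) - y) * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.2, 0.8, 0.4])
y = 1
p = 1.0 / (1.0 + np.exp(-w @ x))
grad = (p - y) * w

x_adv = fgsm_perturb(x, grad, eps=0.1)
# The perturbation stays bounded: max |x_adv - x| <= eps.
```

Libraries such as Foolbox wrap this pattern (and far stronger iterative attacks) behind a uniform interface, computing the gradient automatically for arbitrary models.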

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

F Croce, M Hein - International conference on machine …, 2020 - proceedings.mlr.press
The field of defense strategies against adversarial attacks has significantly grown over the
last years, but progress is hampered as the evaluation of adversarial defenses is often …
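The core idea of this line of work is that robust accuracy should be measured as the per-example worst case over a diverse ensemble of attacks: an input counts as robust only if every attack fails on it. A minimal sketch, with a hypothetical boolean success matrix standing in for real attack runs:

```python
import numpy as np

def ensemble_robust_accuracy(success_by_attack):
    """Worst-case robust accuracy over an attack ensemble.

    success_by_attack: boolean array of shape (n_attacks, n_examples),
    True where that attack found an adversarial example. An example is
    robust only if *every* attack in the ensemble fails on it."""
    broken = np.any(success_by_attack, axis=0)  # any attack succeeds
    return 1.0 - broken.mean()

# Hypothetical results for 3 attacks on 5 examples.
success = np.array([
    [True,  False, False, False, True ],   # attack A
    [False, True,  False, False, False],   # attack B
    [False, False, False, True,  False],   # attack C
])
acc = ensemble_robust_accuracy(success)  # only 1 of 5 examples survives all attacks
```

Aggregating per example rather than per attack is what prevents a defense from looking robust merely because each individual attack happens to fail on a different subset of inputs.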

MagNet: a two-pronged defense against adversarial examples

D Meng, H Chen - Proceedings of the 2017 ACM SIGSAC conference on …, 2017 - dl.acm.org
Deep learning has shown impressive performance on hard perceptual problems. However,
researchers found deep learning systems to be vulnerable to small, specially crafted …
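MagNet's two prongs are a detector, which rejects inputs whose autoencoder reconstruction error is large, and a reformer, which replaces an accepted input with its reconstruction to nudge it back toward the clean-data manifold. The sketch below is a toy illustration, with a stand-in projection in place of a trained autoencoder:

```python
import numpy as np

def magnet_filter(x, reconstruct, threshold):
    """MagNet-style two-pronged pipeline (toy sketch).

    Detector: reject x if its reconstruction error exceeds a threshold.
    Reformer: otherwise, pass on the reconstruction instead of x."""
    x_rec = reconstruct(x)
    err = np.mean((x - x_rec) ** 2)  # reconstruction error
    if err > threshold:
        return None       # detected as adversarial: reject
    return x_rec          # reformed input, forwarded to the classifier

# Stand-in "autoencoder": pretend clean data lies near the all-equal
# line, so reconstruction projects onto it (a real system learns this).
reconstruct = lambda x: np.full_like(x, x.mean())

clean = np.array([0.50, 0.52, 0.48])   # near the toy manifold
adv = np.array([0.9, 0.1, 0.5])        # pushed far off it

out_clean = magnet_filter(clean, reconstruct, threshold=0.01)
out_adv = magnet_filter(adv, reconstruct, threshold=0.01)  # rejected
```

In the actual defense, the reconstruction function is a denoising autoencoder trained only on clean examples, so both prongs require no knowledge of the attack.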

Scaling adversarial training to large perturbation bounds

S Addepalli, S Jain, G Sriramanan… - … on Computer Vision, 2022 - Springer
The vulnerability of Deep Neural Networks to Adversarial Attacks has fuelled
research towards building robust models. While most Adversarial Training algorithms aim at …