Detecting adversarial data by probing multiple perturbations using expected perturbation score

S Zhang, F Liu, J Yang, Y Yang, C Li… - … on machine learning, 2023 - proceedings.mlr.press
Adversarial detection aims to determine whether a given sample is an adversarial one
based on the discrepancy between natural and adversarial distributions. Unfortunately …

OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift

L Li, Y Wang, C Sitawarin, M Spratling - arXiv preprint arXiv:2310.12793, 2023 - arxiv.org
Existing works have made great progress in improving adversarial robustness, but typically
test their method only on data from the same distribution as the training data, i.e., in …

Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets

Y Wang, D Zhang, Y Wu, H Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite incredible advances, deep learning has been shown to be susceptible to
adversarial attacks. Numerous approaches have been proposed to train robust networks both …

CAS-NN: A Robust Cascade Neural Network Without Compromising Clean Accuracy

Z Chen, Z He, Y Zhou, PPK Chan, F Zhang… - … Conference on Neural …, 2023 - Springer
Adversarial training has emerged as a prominent approach for training robust classifiers.
However, recent research indicates that adversarial training inevitably results in a decline …

Efficient Diversified Attack: Multiple Diversification Strategies Lead to the Efficient Adversarial Attacks

K Yamamura, I Oe, N Hata, H Ishikura, K Fujisawa - openreview.net
Deep learning models are vulnerable to adversarial examples (AEs). Recently, adversarial
attacks that generate AEs by optimizing a multimodal function with many local optima …