Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete …
N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars …
F Tramer - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Making classifiers robust to adversarial examples is challenging. Thus, many works tackle the seemingly easier task of detecting perturbed inputs. We show a barrier towards this goal …
With the evolution of self-supervised learning, the pre-training paradigm has emerged as a predominant solution within the deep learning landscape. Model providers furnish pre …
CH Ho, N Vasconcelos - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The problem of adversarial defenses for image classification, where the goal is to robustify a classifier against adversarial examples, is considered. Inspired by the hypothesis that these …
Compared to image classification models, black-box adversarial attacks against video classification models have been largely understudied. This could be possible …
Recent advances in artificial intelligence and the increasing need for robust defensive measures in network security have led to the adoption of deep learning approaches for …
Z Han, XJ Gui, H Sun, Y Yin, S Li - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In many non-stationary environments, machine learning algorithms usually confront distribution shift scenarios. Previous domain adaptation methods have achieved great …
Widely deployed deep neural network (DNN) models have been proven vulnerable to adversarial perturbations in many applications (e.g., image, audio, and text classification). To …