GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image Classification. For this reason, they have been used even in security-critical applications …
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of …
Existing neural network-based autonomous systems have been shown to be vulnerable to adversarial attacks; therefore, a sophisticated evaluation of their robustness is of great …
Universal adversarial perturbation (UAP), i.e., a single perturbation that fools the network on most images, is widely recognized as a more practical attack because the UAP can be generated …
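The idea of a single shared perturbation can be sketched in a few lines. The snippet below is a minimal, hedged illustration only: it uses a hypothetical two-class linear model in place of a deep network, pushes every image toward one target class with FGSM-style signed-gradient steps, and projects the shared perturbation back onto an L-infinity ball. Real UAP algorithms aggregate per-image perturbations against an actual DNN; the model, budget, and step size here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))          # toy 2-class linear "network" (illustrative stand-in)
X = rng.normal(size=(20, 8))         # batch of toy "images"
labels = np.argmax(X @ W.T, axis=1)  # clean predictions

eps = 0.5        # L-inf budget shared by every image
v = np.zeros(8)  # the single universal perturbation
target = 0       # hypothetical target class the UAP pushes toward

for _ in range(50):                  # sweep the dataset, growing v
    for x, y in zip(X, labels):
        if y != target and np.argmax((x + v) @ W.T) == y:
            # signed-gradient step on the logit gap, shared across all images
            v += 0.1 * np.sign(W[target] - W[y])
            v = np.clip(v, -eps, eps)  # project back onto the eps-ball

victims = labels != target
fooled = np.mean(np.argmax((X[victims] + v) @ W.T, axis=1) != labels[victims])
print(f"fooling rate on non-target images: {fooled:.2f}")
```

The key property, matching the snippet above, is that `v` is computed once and then applied unchanged to every input.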
Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However …
Post-hoc gradient-based interpretability methods [Simonyan et al., 2013, Smilkov et al., 2017] that provide instance-specific explanations of model predictions are often based on …
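Instance-specific gradient explanations of the kind cited above can be sketched compactly. The toy below computes a vanilla gradient saliency map and a SmoothGrad-style noisy average on a tiny hand-written ReLU network; the network, shapes, noise level, and sample count are illustrative assumptions, not the cited papers' setups.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 10))  # toy hidden layer
w2 = rng.normal(size=16)        # toy output weights

def grad(x):
    # backprop of the class score w2 . relu(W1 x) through the ReLU:
    # d score / d x = W1^T (w2 * 1[W1 x > 0])
    mask = (W1 @ x > 0).astype(float)
    return W1.T @ (w2 * mask)

x = rng.normal(size=10)
vanilla = grad(x)  # vanilla gradient saliency (Simonyan et al. style)

# SmoothGrad-style estimate: average the gradient over noisy copies of the input
noise_sd, n_samples = 0.15, 64
smooth = np.mean(
    [grad(x + noise_sd * rng.normal(size=10)) for _ in range(n_samples)], axis=0
)

print("vanilla saliency:", np.abs(vanilla).round(2))
print("smoothed saliency:", np.abs(smooth).round(2))
```

Per-feature magnitudes of these gradients are what get rendered as the familiar saliency heatmaps.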
J Kim, BK Lee, YM Ro - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Adversarial examples, generated by carefully crafted perturbations, have attracted considerable attention in research fields. Recent works have argued that the existence of the …
M Xu, T Zhang, Z Li, M Liu, D Zhang - Medical Image Analysis, 2021 - Elsevier
Deep learning models (with neural networks) have been widely used in challenging tasks such as computer-aided disease diagnosis based on medical images. Recent studies have …
The booming interest in adversarial attacks stems from a misalignment between human vision and a deep neural network (DNN), i.e., a human-imperceptible perturbation fools the …
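The "human-imperceptible perturbation" in this snippet can be illustrated with a single FGSM-style step, in contrast to the universal (image-agnostic) setting: here the perturbation is crafted per input. The model and budget below are toy assumptions standing in for a DNN and a pixel-space constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 8))         # toy 2-class linear model (stand-in for a DNN)
x = rng.normal(size=8)              # one toy "image"
y = int(np.argmax(W @ x))           # clean prediction

eps = 0.3                           # small L-inf budget, the "imperceptible" constraint
other = 1 - y
# one signed-gradient step that shrinks the logit gap toward the other class
x_adv = x + eps * np.sign(W[other] - W[y])

print("clean prediction:", y, "| adversarial prediction:", int(np.argmax(W @ x_adv)))
```

Whether the single step actually flips the label depends on the clean margin, but it always moves the logits toward the wrong class while every coordinate of the input changes by at most `eps`.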