Towards lightweight black-box attack against deep neural networks

C Sun, Y Zhang, W Chaoqun, Q Wang… - Advances in …, 2022 - proceedings.neurips.cc
Black-box attacks can generate adversarial examples without accessing the parameters of
the target model, largely exacerbating the threats to deployed deep neural networks (DNNs) …

Expected Perturbation Scores for Adversarial Detection

S Zhang, F Liu, J Yang, Y Yang, B Han, M Tan - openreview.net
Adversarial detection aims to determine whether a given sample is adversarial
based on the discrepancy between the natural and adversarial distributions. Unfortunately …