Detecting adversarial examples from sensitivity inconsistency of spatial-transform domain

J Tian, J Zhou, Y Li, J Duan - Proceedings of the AAAI Conference on …, 2021 - ojs.aaai.org
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial
examples (AEs), which are maliciously designed to cause dramatic model output errors. In
this work, we reveal that normal examples (NEs) are insensitive to fluctuations occurring
at highly-curved regions of the decision boundary, whereas AEs, typically crafted in a
single domain (mostly the spatial domain), exhibit exorbitant sensitivity to such fluctuations. This
phenomenon motivates us to design another classifier (called dual classifier) with …
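The detection idea described in the abstract can be sketched as follows: a primal classifier f and a dual classifier g (trained on a transformed version of the input) are compared, and inputs on which the two disagree strongly are flagged as adversarial. This is a minimal illustrative sketch, not the paper's implementation: the transform T below (a simple low-pass blur) merely stands in for the paper's wavelet-based dual domain, and f, g, and the threshold tau are placeholders.

```python
# Hedged sketch of sensitivity-inconsistency detection.
# Assumptions (not from the paper): `f` and `g` are trained torch models,
# `transform_T` is a stand-in low-frequency transform, and `tau` is a
# detection threshold calibrated on held-out normal examples.
import torch
import torch.nn.functional as F


def transform_T(x: torch.Tensor) -> torch.Tensor:
    """Stand-in spatial transform: suppress high-frequency content
    by downsampling and upsampling (placeholder for the paper's
    wavelet-based transform)."""
    low = F.avg_pool2d(x, kernel_size=2)
    return F.interpolate(low, scale_factor=2, mode="bilinear",
                         align_corners=False)


@torch.no_grad()
def inconsistency_score(f, g, x: torch.Tensor) -> torch.Tensor:
    """Per-example disagreement between the primal classifier f(x)
    and the dual classifier g(T(x))."""
    p_primal = F.softmax(f(x), dim=1)
    p_dual = F.softmax(g(transform_T(x)), dim=1)
    # L1 distance between the two probability vectors; AEs are expected
    # to produce larger disagreement than normal examples.
    return (p_primal - p_dual).abs().sum(dim=1)


def detect(f, g, x: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Flag inputs whose sensitivity inconsistency exceeds tau."""
    return inconsistency_score(f, g, x) > tau
```

In use, one would train g on T-transformed training data, sweep tau on clean validation inputs to fix a false-positive rate, and then apply `detect` at inference time; the specific transform and score are choices the paper makes differently.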
