Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high …
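The snippet above describes adversarial training only at a high level: train the network on adversarial examples rather than clean ones. A minimal sketch of that idea, using a toy logistic-regression model and a single-step (FGSM-style) inner perturbation rather than any specific paper's method, might look like this; all names and hyperparameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training sketch for logistic regression: at each step,
    replace every input by a one-step sign-gradient perturbation of it
    (the inner maximization), then take a gradient step on the perturbed
    batch (the outer minimization)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_x = (p - y)[:, None] * w           # d(logistic loss)/dx per example
        X_adv = X + eps * np.sign(grad_x)       # inner step: worst-case-ish inputs
        p_adv = sigmoid(X_adv @ w + b)
        err = p_adv - y
        w -= lr * (X_adv.T @ err) / len(y)      # outer step: fit the adversarial batch
        b -= lr * err.mean()
    return w, b

# linearly separable toy data: two Gaussian blobs
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = adversarial_train(X, y)
acc = float(((sigmoid(X @ w + b) > 0.5) == y).mean())
```

The inner perturbation step is what makes this expensive for deep networks, which is the cost the truncated sentence ("Unfortunately, the high …") presumably refers to.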
Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is …
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most …
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible …
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed …
Graph deep learning models, such as graph convolutional networks (GCNs), achieve remarkable performance on graph-data tasks. Similar to other types of deep models …
Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security …
H Zhang, J Wang - Advances in neural information …, 2019 - proceedings.neurips.cc
We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches …
S Ma, Y Liu - Proceedings of the 26th network and distributed system …, 2019 - par.nsf.gov
Deep Neural Networks (DNNs) are vulnerable to adversarial samples that are generated by perturbing correctly classified inputs to cause DNN models to misbehave (e.g., …
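Several of the snippets above describe the same core mechanism: adversarial samples are generated by adding a small, sign-of-gradient perturbation to a correctly classified input. A minimal sketch of that one-step attack (FGSM-style) on a toy linear classifier, with all names and values being illustrative assumptions rather than any listed paper's method:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One sign-gradient (FGSM-style) step against a logistic-regression
    classifier. The gradient of the logistic loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack moves x by eps in its sign."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# toy example: a clean point the model classifies correctly (class 1)
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, -0.3])                # w.x + b = 0.6 > 0
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.7)

clean_pred = (w @ x + b) > 0             # True: correctly classified
adv_pred = (w @ x_adv + b) > 0           # False: the perturbation flips the label
```

With a deep network the same recipe applies, but the input gradient comes from backpropagation, and `eps` is kept small so the perturbation stays imperceptible.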