Mitigating adversarial attacks for deep neural networks by input deformation and augmentation

P Qiu, Q Wang, D Wang, Y Lyu, Z Lu… - 2020 25th Asia and …, 2020 - ieeexplore.ieee.org
Typical Deep Neural Networks (DNN) are susceptible to adversarial attacks that add
malicious perturbations to input to mislead the DNN model. Most of the state-of-the-art …

Mitigating Adversarial Attacks for Deep Neural Networks by Input Deformation and Augmentation

P Qiu, Q Wang, D Wang, Y Lyu, Z Lu, G Qu - … of the 25th Asia and South …, 2020 - dl.acm.org
Typical Deep Neural Networks (DNN) are susceptible to adversarial attacks that add
malicious perturbations to input to mislead the DNN model. Most of the state-of-the-art …

[PDF][PDF] Mitigating Adversarial Attacks for Deep Neural Networks by Input Deformation and Augmentation

P Qiu, Q Wang, D Wang, Y Lyu, Z Lu, G Qu - csuncle.com
Typical Deep Neural Networks (DNN) are susceptible to adversarial attacks that add
malicious perturbations to input to mislead the DNN model. Most of the state-of-the-art …