Biometrics: Trust, but verify

AK Jain, D Deb, JJ Engelsma - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Over the past two decades, biometric recognition has exploded into a plethora of different
applications around the globe. This proliferation can be attributed to the high levels of …

Mitigating evasion attacks to deep neural networks via region-based classification

X Cao, NZ Gong - Proceedings of the 33rd Annual Computer Security …, 2017 - dl.acm.org
Deep neural networks (DNNs) have transformed several artificial intelligence research
areas including computer vision, speech recognition, and natural language processing …

Seeing isn't believing: Towards more robust adversarial attack against real world object detectors

Y Zhao, H Zhu, R Liang, Q Shen, S Zhang… - Proceedings of the 2019 …, 2019 - dl.acm.org
Recently, Adversarial Examples (AEs) that deceive deep learning models have been a topic
of intense research interest. Compared with the AEs in the digital space, the physical …

Adversarial examples are a natural consequence of test error in noise

J Gilmer, N Ford, N Carlini… - … Conference on Machine …, 2019 - proceedings.mlr.press
Over the last few years, the phenomenon of adversarial examples—maliciously constructed
inputs that fool trained machine learning models—has captured the attention of the research …

Adversarial examples on object recognition: A comprehensive survey

A Serban, E Poll, J Visser - ACM Computing Surveys (CSUR), 2020 - dl.acm.org
Deep neural networks are at the forefront of machine learning research. However, despite
achieving impressive performance on complex tasks, they can be very sensitive: Small …

Boosting the transferability of adversarial attacks with reverse adversarial perturbation

Z Qin, Y Fan, Y Liu, L Shen, Y Zhang… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …

A survey of adversarial attack and defense methods for malware classification in cyber security

S Yan, J Ren, W Wang, L Sun… - … Surveys & Tutorials, 2022 - ieeexplore.ieee.org
Malware poses a severe threat to cyber security. Attackers use malware to achieve their
malicious purposes, such as unauthorized access, stealing confidential data, blackmailing …

RayS: A ray searching method for hard-label adversarial attack

J Chen, Q Gu - Proceedings of the 26th ACM SIGKDD International …, 2020 - dl.acm.org
Deep neural networks are vulnerable to adversarial attacks. Among different attack settings,
the most challenging, yet most practical, is the hard-label setting where the attacker …

Adversarial defense by restricting the hidden space of deep neural networks

A Mustafa, S Khan, M Hayat… - Proceedings of the …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial attacks which can fool them by adding
minuscule perturbations to the input images. The robustness of existing defenses suffers …

Image super-resolution as a defense against adversarial attacks

A Mustafa, SH Khan, M Hayat, J Shen… - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Convolutional Neural Networks have achieved significant success across multiple computer
vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible …