Analyzing the noise robustness of deep neural networks

M Liu, S Liu, H Su, K Cao, J Zhu - 2018 IEEE Conference on …, 2018 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial
examples. These examples are intentionally designed by making imperceptible …

An adversarial perturbation approach against CNN-based soft biometrics detection

S Marrone, C Sansone - 2019 International Joint Conference …, 2019 - ieeexplore.ieee.org
The use of biometric-based authentication systems has spread across daily-life consumer
electronics. Over the years, researchers' interest has shifted from hard (such as fingerprints …

Simultaneous adversarial training - learn from others' mistakes

Z Liao - 2019 14th IEEE International Conference on Automatic …, 2019 - ieeexplore.ieee.org
Adversarial examples are maliciously tweaked images that can easily fool machine learning
techniques, such as neural networks, but they are normally not visually distinguishable for …

Mixing between the Cross Entropy and the Expectation Loss Terms

B Battash, L Wolf, T Hazan - arXiv preprint arXiv:2109.05635, 2021 - arxiv.org
The cross entropy loss is widely used due to its effectiveness and solid theoretical
grounding. However, as training progresses, the loss tends to focus on hard-to-classify …
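For context, the two loss terms named in this entry's title have standard textbook forms, for a softmax output p and true class y: the cross entropy loss and the expectation loss (the latter taken here as the expected zero-one loss under p). This is a minimal sketch of those definitions only; the interpolation weight λ below is a hypothetical illustration of "mixing", since the snippet does not show the paper's actual scheme:

% Standard cross entropy and expected zero-one loss for softmax output p, label y.
% The mixed form with weight λ is an assumed illustration, not the paper's formula.
\mathcal{L}_{\mathrm{CE}}(p, y) = -\log p_y, \qquad
\mathcal{L}_{\mathrm{Exp}}(p, y) = 1 - p_y, \qquad
\mathcal{L}_{\lambda} = \lambda\, \mathcal{L}_{\mathrm{CE}} + (1 - \lambda)\, \mathcal{L}_{\mathrm{Exp}}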

TEAM: A Taylor Expansion-Based Method for Generating Adversarial Examples

Y Qian, XM Zhang, W Swaileh, L Wei, B Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
Although Deep Neural Networks (DNNs) have been successfully applied in many
fields, they are vulnerable to adversarial examples. Adversarial training is one of the most …