Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples. By transforming a …
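The snippet is truncated, but the transformation it alludes to is typically a small, loss-increasing perturbation of the input. As a generic illustration (not necessarily this entry's method), a minimal fast gradient sign method (FGSM) sketch in PyTorch; the helper name `fgsm_attack` and the `epsilon` budget are illustrative choices:

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial sample by perturbing x in the direction
    that increases the classification loss (FGSM, a standard baseline).
    The epsilon budget is illustrative, not taken from the entry above."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clip back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```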
Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by regularizing the …
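A minimal sketch of the regularization route this entry describes: pull the model's parameters toward a human-supplied prior. The helper name `prior_regularized_loss`, the quadratic penalty, and the `lam` weight are assumptions for illustration, not the entry's own formulation:

```python
import torch
import torch.nn.functional as F

def prior_regularized_loss(pred, target, params, prior_params, lam=0.1):
    """Task loss plus a quadratic penalty pulling each parameter tensor
    toward a human-supplied prior -- one common way a prior transfers
    information into a model. lam trades off task fit against the prior."""
    task_loss = F.mse_loss(pred, target)
    prior_penalty = sum(((p - q) ** 2).sum()
                        for p, q in zip(params, prior_params))
    return task_loss + lam * prior_penalty
```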
Recently, the vulnerability of deep neural network (DNN)-based audio systems to adversarial attacks has received increasing attention. However, the existing audio …
A Tiane, C Okar, H Chaoui - IEEE Transactions on Power …, 2023 - ieeexplore.ieee.org
Neural networks are subject to malicious data poisoning attacks that degrade the model's ability to make accurate predictions. The attacks are generated using adversarial …
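The snippet cuts off before describing how the poisoning is generated. For context only, one of the simplest poisoning attacks is label flipping; this sketch assumes a classification setup, and the `frac` poisoning rate and `seed` are illustrative:

```python
import numpy as np

def flip_labels(y, num_classes, frac=0.1, seed=0):
    """Return a poisoned copy of the label array y with a random
    fraction of labels reassigned to a different class (basic
    label-flipping data poisoning)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    # Shift each chosen label by a random nonzero offset mod num_classes,
    # guaranteeing the new label differs from the original one.
    offsets = rng.integers(1, num_classes, size=len(idx))
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned
```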
AT Nguyen, E Raff - arXiv preprint arXiv:1812.02885, 2018 - arxiv.org
Adversarial attacks against neural networks in a regression setting are a critical yet understudied problem. In this work, we advance the state of the art by investigating …
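As a hedged illustration of the regression setting (not necessarily the attack Nguyen and Raff propose), the FGSM step sketched earlier carries over by swapping the classification loss for a regression loss such as MSE:

```python
import torch
import torch.nn as nn

def fgsm_regression(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    epsilon: float = 0.05) -> torch.Tensor:
    """FGSM adapted to regression: step in the sign of the gradient of
    the MSE between prediction and target, pushing the model's output
    away from the true value. epsilon is an illustrative budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```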
The reasons why Deep Neural Networks are susceptible to being fooled by adversarial examples remain an open question. Indeed, many different strategies can be …
R Wang, Z Chen, H Dong, Q Xuan - IEEE Access, 2021 - ieeexplore.ieee.org
Many adversarial attack methods have been proposed to probe the security of deep learning models. Previous works on detecting adversarial samples show superior accuracy but …
Z Chen, RX Wang, Y Lu, Q Xuan - ICML 2021 Workshop on …, 2021 - openreview.net
Adversarial attacks are a central security issue for deep neural networks. Detecting adversarial samples is an effective mechanism for defending against adversarial attacks. Previous …
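Both detection entries frame detection as separating clean from adversarial inputs. A minimal hedged sketch of that framing (a generic baseline, not these papers' detectors): fit a logistic-regression detector on the victim model's logits, assuming labeled clean/adversarial sets are available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_logit_detector(clean_logits, adv_logits):
    """Fit a binary detector that separates clean from adversarial
    inputs using the victim model's logits as features. The feature
    choice (raw logits) is an illustrative assumption."""
    X = np.vstack([clean_logits, adv_logits])
    y = np.concatenate([np.zeros(len(clean_logits)),
                        np.ones(len(adv_logits))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```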
YF Lai - arXiv preprint arXiv:2405.07884, 2024 - arxiv.org
In the field of machine learning, traditional regularization methods generally add regularization terms directly to the loss function. This paper introduces the "Lai loss", a novel …
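For contrast, this is the traditional pattern the abstract refers to: a penalty added directly to the loss, shown here as standard L2 weight decay. The `lam` coefficient is illustrative, and the entry's "Lai loss" itself is not reconstructed here:

```python
import torch
import torch.nn.functional as F

def l2_regularized_loss(pred, target, model, lam=1e-4):
    """The conventional scheme the abstract contrasts against:
    task loss plus an explicit L2 penalty summed over all weights."""
    task_loss = F.cross_entropy(pred, target)
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return task_loss + lam * l2
```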