Y Xing, Q Song, G Cheng - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Adversarial training is a popular tool for remedying the vulnerability of deep learning models to adversarial attacks, and there is a rich theoretical literature on the training loss of …
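For concreteness, a minimal sketch of the min-max loop that adversarial training refers to, in the common PGD style (Madry et al.); the model, optimizer, and all hyperparameters below are illustrative assumptions, not details from this paper:

```python
# Minimal PGD adversarial training sketch (PyTorch). All names and
# hyperparameters (eps, alpha, steps) are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient ascent on the loss inside an L-inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                    # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)                              # keep valid pixels
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)              # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)      # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```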
Defending against adversarial examples remains an open problem. A common belief is that randomness at inference increases the cost of finding adversarial inputs. An example of …
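As a sketch of why inference-time randomness can raise attack cost: each query sees fresh input noise, so an attacker cannot rely on deterministic outputs. The wrapper below, including `sigma` and the vote count, is an illustrative assumption, not the construction from this entry:

```python
# Sketch of an inference-time randomized classifier: every forward pass adds
# fresh Gaussian input noise. `sigma` and `n_votes` are illustrative choices.
import torch

class RandomizedClassifier(torch.nn.Module):
    def __init__(self, base_model, sigma=0.1, n_votes=8):
        super().__init__()
        self.base = base_model
        self.sigma = sigma
        self.n_votes = n_votes

    @torch.no_grad()
    def forward(self, x):
        # Average class probabilities over independently noised copies of x;
        # repeated identical queries still see different noise samples.
        probs = torch.stack([
            self.base(x + self.sigma * torch.randn_like(x)).softmax(dim=-1)
            for _ in range(self.n_votes)
        ])
        return probs.mean(dim=0)
```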
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation-based training and noise injection …
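A hedged sketch of the NFM recipe as the abstract describes it, mixup interpolation followed by noise injection, shown here at the input level only (the method also applies to hidden representations); the noise scales and Beta parameter are illustrative:

```python
# Input-level sketch of Noisy Feature Mixup: mixup interpolation followed by
# multiplicative and additive noise. Noise scales and alpha are assumptions.
import torch

def noisy_feature_mixup(x, y, alpha=1.0, add_scale=0.1, mult_scale=0.1):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]                       # mixup interpolation
    x_mix = (1 + mult_scale * torch.randn_like(x_mix)) * x_mix  # multiplicative noise
    x_mix = x_mix + add_scale * torch.randn_like(x_mix)         # additive noise
    # Train with the usual mixup loss: lam * loss(y) + (1 - lam) * loss(y[perm]).
    return x_mix, y, y[perm], lam
```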
Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles …
J Byun, H Go, C Kim - Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022 - openaccess.thecvf.com
While deep neural networks show unprecedented performance in various tasks, the vulnerability to adversarial examples hinders their deployment in safety-critical systems …
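To illustrate one mechanism behind input-noise defenses against query-based black-box attacks: such attacks often estimate gradients by finite differences of the model's score, and even small query noise can dominate those estimates. The toy demo below is entirely illustrative, not the paper's experiment:

```python
# Toy illustration (not the paper's experiment): small query noise corrupts
# the finite-difference gradient estimates used by score-based attacks.
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    return -np.sum(x ** 2)        # stand-in for a model's target-class score

def defended_score(x, sigma=0.01):
    return score(x + sigma * rng.standard_normal(x.shape))  # noisy query

def fd_gradient(f, x, h=1e-3):
    # Two-point central finite differences along each coordinate.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = rng.standard_normal(5)
print("true gradient :", -2 * x)
print("clean estimate:", fd_gradient(score, x))           # close to the truth
print("noisy estimate:", fd_gradient(defended_score, x))  # dominated by noise
```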
Mixtures of classifiers (also known as randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, it has been shown that existing …
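To make the object concrete, a minimal sketch of a randomized ensemble: a fixed set of member classifiers with sampling weights, one member drawn per query. The member models and weights are placeholders:

```python
# Sketch of a mixture of classifiers: one member is sampled per query from a
# fixed categorical distribution. Members and weights are placeholders.
import torch

class RandomizedEnsemble(torch.nn.Module):
    def __init__(self, members, weights):
        super().__init__()
        self.members = torch.nn.ModuleList(members)
        self.register_buffer("weights", torch.tensor(weights, dtype=torch.float))

    def forward(self, x):
        idx = torch.multinomial(self.weights, 1).item()  # draw a member index
        return self.members[idx](x)                      # answer with that member
```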
S Singh, JT Khim - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
The vast majority of statistical theory on binary classification characterizes performance in terms of accuracy. However, accuracy is known in many cases to poorly reflect the practical …
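A one-screen illustration of why accuracy can mislead, under an assumed 1%-positive class split: the trivial majority-class predictor scores 99% accuracy while achieving zero recall on the minority class:

```python
# Toy example: under heavy class imbalance, accuracy hides total failure on
# the minority class. The 1%-positive split is an illustrative assumption.
import numpy as np

y_true = np.array([1] * 10 + [0] * 990)   # 1% positives
y_pred = np.zeros_like(y_true)            # always predict the majority class

accuracy = (y_pred == y_true).mean()
recall = (y_pred[y_true == 1] == 1).mean()
print(f"accuracy = {accuracy:.3f}, recall = {recall:.3f}")  # 0.990, 0.000
```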
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as …
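One common instantiation of such a probabilistic classifier is prediction by majority vote over input noise, as in randomized smoothing; the sketch below assumes a single-example batch, and `sigma` and the sample count are illustrative:

```python
# Sketch of a smoothed probabilistic classifier: predict the majority-vote
# label of the base model over Gaussian input noise. Assumes x has batch
# size 1; `sigma` and `n_samples` are illustrative.
import torch

@torch.no_grad()
def smoothed_predict(base_model, x, sigma=0.25, n_samples=100):
    votes = []
    for _ in range(n_samples):
        logits = base_model(x + sigma * torch.randn_like(x))
        votes.append(logits.argmax(dim=-1).item())
    return max(set(votes), key=votes.count)   # majority-vote label
```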
Certified robustness in machine learning has primarily focused on adversarial perturbations with a fixed attack budget for each sample in the input distribution. In this work, we present …
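For reference, the fixed-budget setting this entry contrasts with can be made concrete with a randomized-smoothing-style certificate (Cohen et al., 2019): a sample is certified when sigma * Phi^{-1}(p_lower) >= eps for a fixed budget eps. The counts, sigma, and eps below are illustrative:

```python
# Sketch of a fixed-budget certification check in the randomized-smoothing
# style (Cohen et al., 2019). Counts, sigma, and eps are illustrative.
from scipy.stats import beta, norm

def certified_radius(n_top, n_total, sigma, alpha=0.001):
    # Clopper-Pearson one-sided lower bound on the top-class probability.
    p_lower = beta.ppf(alpha, n_top, n_total - n_top + 1)
    if p_lower <= 0.5:
        return 0.0                     # abstain: no certificate at this point
    return sigma * norm.ppf(p_lower)   # certified L2 radius

eps = 0.25                             # fixed attack budget for every sample
r = certified_radius(n_top=990, n_total=1000, sigma=0.5)
print(f"radius = {r:.3f}, certified at eps = {eps}: {r >= eps}")
```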