J Liu, Y Jin - Journal of Automation and Intelligence, 2023 - Elsevier
Deep learning has made remarkable progress in various tasks. Despite the excellent performance, deep learning models are still not robust, especially to well-designed …
Current methods for training robust networks cause a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning …
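Concretely, the "well-designed" perturbations and robust-training procedures these snippets allude to are usually variants of adversarial training. The sketch below is a generic FGSM-based illustration, not the method of either paper; `model`, `optimizer`, and inputs scaled to [0, 1] are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """One-step sign-gradient perturbation: a simple 'well-designed' attack."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Step in the direction that maximally increases the loss, then re-clip.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def robust_training_step(model, optimizer, x, y):
    """Train on adversarial examples; this typically costs some clean accuracy."""
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```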
K Kawaguchi, Z Deng, X Ji… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of information bottleneck, where unnecessary information is (often implicitly) minimized while …
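For reference, a common way to instantiate the information-bottleneck idea is a variational objective that trades prediction against compression. The sketch below is a deep-VIB-style illustration only; the encoder outputs `mu`/`logvar` and the weight `beta` are assumptions, not details from the paper above.

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, targets, beta=1e-3):
    """Task loss plus a beta-weighted compression term KL(q(z|x) || N(0, I)).

    mu, logvar: encoder outputs parameterizing a diagonal Gaussian q(z|x)
    logits, targets: decoder predictions and labels
    beta: trade-off between keeping task-relevant and discarding unnecessary information
    """
    ce = F.cross_entropy(logits, targets)
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I):
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return ce + beta * kl
```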
H Liu, J Jia, NZ Gong - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021 - openaccess.thecvf.com
3D point cloud classification has many safety-critical applications such as autonomous driving and robotic grasping. However, several studies have shown that it is …
The Lipschitz constant of neural networks has been established as a key quantity for enforcing robustness to adversarial examples. In this paper, we tackle the problem of building $1$-Lipschitz neural networks …
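A standard way to control this quantity in practice is power iteration on a layer's weight matrix, which estimates its spectral norm (the layer's l2 Lipschitz constant). The sketch below shows that generic technique, not the specific 1-Lipschitz construction the snippet refers to.

```python
import numpy as np

def spectral_norm(W, n_iters=50, seed=0):
    """Approximate the largest singular value of W (its l2 Lipschitz constant)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        # Alternate W^T and W applications to converge to the top singular vectors.
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

# Dividing W by this estimate rescales the layer to be (approximately) 1-Lipschitz.
```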
J Wang, XE Wang, Y Liu - International Conference on Machine Learning, 2022 - proceedings.mlr.press
A variety of fairness constraints have been proposed in the literature to mitigate group-level statistical bias. Their impacts have been largely evaluated for different groups of populations …
It is well-known that classifiers are vulnerable to adversarial perturbations. To defend against adversarial perturbations, various certified robustness results have been derived …
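One widely used route to such certified robustness results is randomized smoothing, where a base classifier is made provably robust by majority-voting over Gaussian-noised copies of the input. The following Cohen-et-al.-style sketch is illustrative and not necessarily the derivation the snippet refers to; `model`, `sigma`, and the sample count are assumptions.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Majority vote of the base classifier under Gaussian input noise.

    The smoothed classifier g(x) = argmax_c P(f(x + noise) = c) is certifiably
    robust within an l2 radius that grows with the vote margin and sigma.
    """
    with torch.no_grad():
        noise = sigma * torch.randn((n_samples,) + x.shape)
        logits = model(x.unsqueeze(0) + noise)  # broadcast x over the noise batch
        votes = logits.argmax(dim=1)
    return torch.mode(votes).values.item()
```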
While deep neural networks have been achieving state-of-the-art performance across a wide variety of applications, their vulnerability to adversarial attacks limits their widespread …
G Liu, L Lai - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Due to the broad range of applications of reinforcement learning (RL), understanding the effects of adversarial attacks against RL models is essential for the safe applications of this …