AJ Levine, S Feizi - International Conference on Machine …, 2021 - proceedings.mlr.press
Randomized smoothing is a general technique for computing sample-dependent robustness guarantees against adversarial attacks for deep classifiers. Prior works on randomized …
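The snippet above describes randomized smoothing only at a high level; a minimal sketch of the idea, assuming the standard Gaussian-noise formulation (majority vote over noisy copies, with a certified l2 radius of `sigma * Phi^{-1}(p_top)`), might look like the following. The base classifier and inputs here are illustrative toys, not from the paper.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(base_classifier, x, sigma, n_samples=1000, seed=0):
    """Majority-vote prediction of the Gaussian-smoothed classifier.

    Returns (top_class, certified_l2_radius). The radius sigma * Phi^{-1}(p_top)
    is the standard bound; a full implementation would lower-bound p_top with
    a confidence interval instead of using the empirical frequency directly.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.bincount([base_classifier(x + eps) for eps in noise])
    top = int(np.argmax(votes))
    # clamp away from 1.0 so the inverse CDF stays finite
    p_top = min(votes[top] / n_samples, 1.0 - 1.0 / n_samples)
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius

# toy base classifier: sign of the first coordinate
clf = lambda z: int(z[0] > 0)
label, radius = smoothed_predict(clf, np.array([0.5, 0.0]), sigma=0.25)
```

The guarantee is sample-dependent in exactly the sense the abstract mentions: the certified radius grows with how confidently the noisy votes agree on this particular input.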
Pretrained models from self-supervision are widely used to fine-tune downstream tasks faster or with better accuracy. However, gaining robustness from pretraining is left …
S Singla, S Singla, S Feizi - arXiv preprint arXiv:2108.04062, 2021 - arxiv.org
Training convolutional neural networks (CNNs) with a strict Lipschitz constraint under the $ l_ {2} $ norm is useful for provable adversarial robustness, interpretable gradients and …
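One standard way to enforce an l2 Lipschitz constraint on a linear layer, which the snippet above alludes to, is to bound its spectral norm (the largest singular value). The sketch below uses power iteration and is only an illustration of the general technique, not the specific construction from the cited paper.

```python
import numpy as np

def spectral_normalize(W, n_iter=50, eps=1e-12):
    """Rescale W so its largest singular value is at most 1.

    The map x -> W x is Lipschitz under the l2 norm with constant ||W||_2,
    so dividing by that value yields a 1-Lipschitz layer. Power iteration
    estimates the leading singular value without a full SVD.
    """
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ W @ v  # estimate of the leading singular value
    return W / max(sigma, 1.0)  # only shrink; never inflate small weights

W = np.array([[3.0, 0.0], [0.0, 0.5]])
W_norm = spectral_normalize(W)
```

Composing such layers with 1-Lipschitz activations keeps the whole network 1-Lipschitz, which is what makes margin-based robustness certificates and well-behaved gradients possible.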
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields. Integrating AT …
We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed …
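To make "an upper bound on the worst-case loss" concrete, here is a minimal interval bound propagation (IBP) sketch, one common way to compute such a bound; the two-layer network and its weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an input box [l, u] through x -> W x + b."""
    mid, rad = (l + u) / 2.0, (u - l) / 2.0
    c = W @ mid + b
    r = np.abs(W) @ rad
    return c - r, c + r

def interval_relu(l, u):
    """ReLU is monotone, so it maps boxes to boxes elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Worst-case logit bounds for all perturbations ||delta||_inf <= eps
# around x, through a tiny two-layer net (weights are illustrative).
x, eps = np.array([1.0, 0.0]), 0.1
W1, b1 = np.array([[1.0, 2.0], [-1.0, 1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, -1.0], [-1.0, 1.0]]), np.zeros(2)
lo, hi = interval_affine(x - eps, x + eps, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
# Certifiable training minimizes a loss on these worst-case logits,
# e.g. cross-entropy where each wrong class takes its upper bound.
```

Since the true worst-case logits always lie inside `[lo, hi]`, training against these bounds minimizes a valid upper bound on the worst-case loss over the allowed perturbations.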
L Li, M Spratling - arXiv preprint arXiv:2301.09879, 2023 - arxiv.org
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its generalization performance. Data augmentation, which is effective at preventing overfitting …
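For context on the adversarial training procedure that exhibits robust overfitting, a minimal l_inf PGD inner-loop sketch is shown below; the toy linear loss is an assumption for illustration, not the paper's setup.

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, step, n_steps=10):
    """l_inf projected gradient ascent: climb the loss, clip to the eps-ball.

    grad_fn returns the gradient of the training loss w.r.t. the input;
    adversarial training then takes a parameter step on the loss at the
    perturbed point x_adv rather than at the clean point x.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

# toy example: loss(x) = -w.x, so the attack pushes x opposite to w
w = np.array([1.0, -2.0])
grad = lambda z: -w
x_adv = pgd_attack(grad, np.array([0.0, 0.0]), eps=0.1, step=0.05)
```

Data-augmentation remedies for robust overfitting, as studied in the cited work, modify the clean inputs fed into this inner loop rather than the loop itself.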
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks. However, existing works on adversarial …
Machine learning models are susceptible to adversarial perturbations: small changes to the input that can cause large changes in the output. It has also been demonstrated that there exist input …
Adversarial examples have exposed deep neural networks' vulnerability to small local perturbations. It has been shown that constraining their Lipschitz constant should enhance …