Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu… - arXiv preprint arXiv …, 2020 - arxiv.org
Pretrained language models (PLMs) perform poorly under adversarial attacks. To improve
the adversarial robustness, adversarial data augmentation (ADA) has been widely adopted …

Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning

C Si, Z Zhang, F Qi, Z Liu, Y Wang, Q Liu… - Findings of the …, 2021 - aclanthology.org
Pretrained language models (PLMs) perform poorly under adversarial attacks. To improve
the adversarial robustness, adversarial data augmentation (ADA) has been widely adopted …
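Both records describe combining adversarial data augmentation (ADA) with mixup to cover more of the attack space during fine-tuning. As a general illustration of the mixup component only (the interpolation rule of Zhang et al., 2018, not necessarily the paper's exact AMDA procedure), a minimal Python sketch:

import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Draw the mixing coefficient lam ~ Beta(alpha, alpha).
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    # Linearly interpolate both the inputs and the one-hot labels:
    #   x_mix = lam * x1 + (1 - lam) * x2, and likewise for y.
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

# Usage: mix two hypothetical sentence embeddings (dimension 768 is an
# assumption) and their one-hot labels into a virtual training example.
x_a, y_a = np.ones(768), np.array([1.0, 0.0])
x_b, y_b = np.zeros(768), np.array([0.0, 1.0])
x_new, y_new = mixup_pair(x_a, y_a, x_b, y_b)

For text classification, the interpolation is typically applied to embeddings or hidden states rather than raw tokens; how the paper combines these mixed examples with adversarial ones is not shown here.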
