Certified Robustness Against Natural Language Attacks by Causal Intervention

H Zhao, C Ma, X Dong, AT Luu, ZH Deng, H Zhang - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Deep learning models have achieved great success in many fields, yet they are vulnerable
to adversarial examples. This paper follows a causal perspective to look into the adversarial …