Contextualized Perturbation for Textual Adversarial Attack

D Li, Y Zhang, H Peng, L Chen, C Brockett… - arXiv preprint arXiv …, 2020 - arxiv.org
Adversarial examples expose the vulnerabilities of natural language processing (NLP)
models, and can be used to evaluate and improve their robustness. Existing techniques of …

Contextualized Perturbation for Textual Adversarial Attack

D Li, Y Zhang, H Peng, L Chen, C Brockett… - Proceedings of the …, 2021 - aclanthology.org
Adversarial examples expose the vulnerabilities of natural language processing (NLP)
models, and can be used to evaluate and improve their robustness. Existing techniques of …
