Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice

R Bao, J Wang, H Zhao - Findings of the Association for Computational Linguistics, 2021 - aclanthology.org
Pre-trained contextualized language models (PrLMs) have led to strong performance gains
in downstream natural language understanding tasks. However, PrLMs can still be easily …
