A survey on automated fact-checking

Z Guo, M Schlichtkrull, A Vlachos - Transactions of the Association for …, 2022 - direct.mit.edu
Fact-checking has become increasingly important due to the speed with which both
information and misinformation can spread in the modern media ecosystem. Therefore …

Efficient methods for natural language processing: A survey

M Treviso, JU Lee, T Ji, B Aken, Q Cao… - Transactions of the …, 2023 - direct.mit.edu
Recent work in natural language processing (NLP) has yielded appealing results from
scaling model parameters and training data; however, using only scale to improve …

Measure and improve robustness in NLP models: A survey

X Wang, H Wang, D Yang - arXiv preprint arXiv:2112.08313, 2021 - arxiv.org
As NLP models have achieved state-of-the-art performance on benchmarks and gained wide
application, it has become increasingly important to ensure the safe deployment of these …

An empirical study on robustness to spurious correlations using pre-trained language models

L Tu, G Lalwani, S Gella, H He - Transactions of the Association for …, 2020 - direct.mit.edu
Recent work has shown that pre-trained language models such as BERT improve
robustness to spurious correlations in the dataset. Intrigued by these results, we find that the …

Shortcut learning of large language models in natural language understanding

M Du, F He, N Zou, D Tao, X Hu - Communications of the ACM, 2023 - dl.acm.org
Communications of the ACM, January 2024, Vol. 67, No. 1 …

Towards debiasing NLU models from unknown biases

PA Utama, NS Moosavi, I Gurevych - arXiv preprint arXiv:2009.12303, 2020 - arxiv.org
NLU models often exploit biases to achieve high dataset-specific performance without
properly learning the intended task. Recently proposed debiasing methods are shown to be …

Evading the simplicity bias: Training a diverse set of models discovers solutions with superior ood generalization

D Teney, E Abbasnejad, S Lucey… - Proceedings of the …, 2022 - openaccess.thecvf.com
Neural networks trained with SGD were recently shown to rely preferentially on linearly-
predictive features and can ignore complex, equally-predictive ones. This simplicity bias can …

[BOOK][B] Challenges in automated debiasing for toxic language detection

X Zhou - 2020 - search.proquest.com
Biased associations have been a challenge in the development of classifiers for detecting
toxic language, hindering both fairness and accuracy. As potential solutions, we investigate …

Towards interpreting and mitigating shortcut learning behavior of NLU models

M Du, V Manjunatha, R Jain, R Deshpande… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent studies indicate that NLU models are prone to rely on shortcut features for prediction,
without achieving true language understanding. As a result, these models fail to generalize …

Introspective distillation for robust question answering

Y Niu, H Zhang - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Question answering (QA) models are well-known to exploit data biases, e.g., the language prior
in visual QA and the position bias in reading comprehension. Recent debiasing methods …