Neural network classifiers can largely rely on simple spurious features, such as backgrounds, to make predictions. However, even in these cases, we show that they still …
Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore …
Deep neural networks often rely on spurious correlations to make predictions, which hinders generalization beyond training environments. For instance, models that associate cats with …
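The failure mode described in this excerpt can be reproduced in a few lines. The sketch below is purely illustrative and is not taken from any of the papers listed here: a synthetic "background" feature agrees with the label 95% of the time during training but not at test time, and a linear classifier that leans on it loses accuracy under the shift. All feature definitions and correlation strengths are assumptions chosen for the demonstration.

```python
# Illustrative sketch only (not from any paper listed here): a synthetic
# "background" feature is spuriously correlated with the label at training
# time but not at test time. Feature definitions and correlation strengths
# are assumptions chosen for the demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, p_spurious):
    # "core" feature is always moderately predictive; "background" agrees
    # with the label with probability p_spurious.
    y = rng.integers(0, 2, size=n)
    core = y + 0.5 * rng.normal(size=n)
    agree = rng.random(n) < p_spurious
    background = np.where(agree, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([core, background]), y

X_tr, y_tr = make_split(5000, p_spurious=0.95)  # correlation holds in training
X_te, y_te = make_split(5000, p_spurious=0.05)  # correlation reversed at test time

clf = LogisticRegression().fit(X_tr, y_tr)
print("train accuracy:        ", clf.score(X_tr, y_tr))
print("shifted test accuracy: ", clf.score(X_te, y_te))  # drops if the model leans on background
print("weights (core, background):", clf.coef_[0])
```

If the learned weight on the background column dominates, the gap between training and shifted-test accuracy makes the reliance on the spurious feature visible.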
X Wang, H Wang, D Yang - arXiv preprint arXiv:2112.08313, 2021 - arxiv.org
As NLP models have achieved state-of-the-art performance on benchmarks and gained wide application, it has become increasingly important to ensure the safe deployment of these …
Shortcut Learning of Large Language Models in Natural Language Understanding. Communications of the ACM, January 2024, Vol. 67, No. 1.
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features and can ignore complex, equally-predictive ones. This simplicity bias can …
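A common way to exhibit this simplicity bias is a synthetic task in which a linearly-predictive feature and an equally-predictive but nonlinear (XOR-style) feature both determine the label; after training, each feature is randomized in turn to see which one the network actually uses. The PyTorch sketch below is an illustration under assumed hyperparameters, not the cited paper's setup.

```python
# Illustrative sketch (assumed architecture and hyperparameters): both a linear
# feature and an XOR-style feature fully determine the label; after SGD training
# we randomize each one in turn to see which the network actually relies on.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n):
    y = torch.randint(0, 2, (n,))
    linear = (2.0 * y - 1.0).unsqueeze(1) + 0.3 * torch.randn(n, 1)  # linearly predictive
    a = torch.randint(0, 2, (n,))
    b = a ^ y                                                        # a XOR b == y
    xor_feats = torch.stack([a, b], dim=1).float() + 0.05 * torch.randn(n, 2)
    return torch.cat([linear, xor_feats], dim=1), y.float()

X, y = make_data(5000)
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):  # full-batch gradient steps, enough for this toy task
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

def acc(X, y):
    with torch.no_grad():
        return ((model(X).squeeze(1) > 0).float() == y).float().mean().item()

X_te, y_te = make_data(5000)
X_no_lin = X_te.clone(); X_no_lin[:, 0] = X_no_lin[torch.randperm(len(X_te)), 0]
X_no_xor = X_te.clone(); X_no_xor[:, 1:] = X_no_xor[torch.randperm(len(X_te)), 1:]
print("clean accuracy:            ", acc(X_te, y_te))
print("linear feature randomized: ", acc(X_no_lin, y_te))  # large drop => feature was used
print("xor features randomized:   ", acc(X_no_xor, y_te))  # small drop => feature was ignored
```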
Existing methods for isolating hard subpopulations and spurious correlations in datasets often require human intervention. This can make these methods labor-intensive and dataset …
Recent studies indicate that NLU models are prone to rely on shortcut features for prediction, without achieving true language understanding. As a result, these models fail to generalize …
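One standard probe for such shortcut cues (a common diagnostic, not the method of the paper excerpted above) is a hypothesis-only baseline on NLI data: if a classifier that never sees the premise still beats chance, surface features of the hypothesis alone leak the label. The sketch assumes the Hugging Face `datasets` library and the public SNLI dataset.

```python
# Hypothesis-only probe for shortcut cues in NLI (a standard diagnostic, not the
# method of the paper excerpted above). Assumes the Hugging Face `datasets`
# library and the public SNLI dataset.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

snli = load_dataset("snli")
train = snli["train"].select(range(60000)).filter(lambda ex: ex["label"] != -1)
test = snli["validation"].filter(lambda ex: ex["label"] != -1)

vec = TfidfVectorizer(max_features=20000)
X_tr = vec.fit_transform(train["hypothesis"])   # the premise is never seen
X_te = vec.transform(test["hypothesis"])

clf = LogisticRegression(max_iter=1000).fit(X_tr, train["label"])
# Chance on the 3-way task is ~0.33; anything well above that is label leakage
# through surface features of the hypothesis alone.
print("hypothesis-only accuracy:", clf.score(X_te, test["label"]))
```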
Z Xu, K Peng, L Ding, D Tao, X Lu - arXiv preprint arXiv:2403.09963, 2024 - arxiv.org
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction, i.e., prompts tend to introduce biases toward specific labels …
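Prompt bias of this kind can be surfaced by filling a fact-extraction template with a content-free subject and inspecting which answers the model prefers before it has seen any real entity. The sketch below is an assumed protocol for illustration (the model name, the template, and the "N/A" placeholder are all choices of this example), not the cited paper's exact procedure.

```python
# Sketch of a prompt-bias probe (an assumed protocol for illustration, not the
# cited paper's exact procedure): fill a fact-extraction template with a
# content-free subject and inspect which answers the masked LM already prefers.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-cased"                 # assumption: any masked LM would do here
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

template = "{} was born in [MASK]."      # LAMA-style relation template
prompt = template.format("N/A")          # content-free subject

inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos].squeeze(0)

top = torch.topk(logits, 5).indices.tolist()
# High-probability answers for an empty subject reveal the bias the prompt
# itself injects toward specific labels.
print("preferred answers with no real subject:", tok.convert_ids_to_tokens(top))
```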