S Piantadosi - Lingbuzz Preprint, lingbuzz, 2023 - lingbuzz.net
The rise and success of large language models undermines virtually every strong claim for the innateness of language that has been proposed by generative linguistics. Modern …
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent …
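The snippet does not specify how syntactic knowledge in masked language models is tested; one common diagnostic is long-distance subject-verb agreement. The sketch below assumes the Hugging Face fill-mask pipeline and bert-base-uncased (neither is named in the excerpt) and is a minimal illustration of that kind of check, not the cited paper's method.

```python
# Minimal sketch (not the paper's method): check whether a masked LM's
# predictions respect long-distance subject-verb agreement, a common way
# of testing for implicitly learned syntactic structure.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The head noun "keys" is plural; the attractor "cabinet" is singular.
sentence = "The keys to the cabinet [MASK] on the table."

for candidate in fill(sentence, top_k=5):
    # Each candidate is a dict with the predicted token and its probability.
    print(f"{candidate['token_str']:>10s}  {candidate['score']:.3f}")
# If the model tracks the head noun rather than the linearly closest noun,
# "are" should outrank "is".
```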
A Warstadt, SR Bowman - Algebraic structures in natural …, 2022 - taylorfrancis.com
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language. However, the learning environments …
M Ramscar - Language, Cognition and Neuroscience, 2023 - Taylor & Francis
What kind of knowledge accounts for linguistic productivity? How is it acquired? For years, debate on these questions has focused on a seemingly obscure domain: inflectional …
Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the …
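As a rough illustration of the manipulation described here, the sketch below shuffles tokens within each sentence so lexical content is preserved while word order carries no information; the whitespace tokenisation and setup are assumptions, not the cited studies' exact protocol.

```python
# Illustrative corpus perturbation: randomly permute the tokens of each
# sentence before pre-training or fine-tuning, destroying order but keeping
# lexical content. Tokenisation here is naive whitespace splitting.
import random

def permute_sentence(sentence: str, rng: random.Random) -> str:
    tokens = sentence.split()
    rng.shuffle(tokens)          # in-place random permutation of the tokens
    return " ".join(tokens)

rng = random.Random(0)           # fixed seed for reproducibility
corpus = [
    "the chef chopped the onion",
    "language models learn from raw text",
]
shuffled_corpus = [permute_sentence(s, rng) for s in corpus]
print(shuffled_corpus)
```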
A central quest of probing is to uncover how pre-trained models encode a linguistic property within their representations. An encoding, however, might be spurious, i.e., the model might …
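One standard way to guard against such spurious encodings is to compare a probe against a control trained on shuffled labels; the sketch below illustrates that contrast with scikit-learn on placeholder data (a real setup would use hidden states extracted from the pre-trained model and genuine linguistic labels).

```python
# Sketch of a linear probe plus a shuffled-label control, one common (not the
# only) way to check whether a probed property is genuinely encoded rather
# than an artifact of the probe itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))      # placeholder for real model representations
y = rng.integers(0, 2, size=500)    # placeholder linguistic-property labels

probe = LogisticRegression(max_iter=1000)
real_acc = cross_val_score(probe, X, y, cv=5).mean()

# Control: the same probe on shuffled labels; accuracy near chance suggests
# the probe is not simply memorising the inputs.
control_acc = cross_val_score(probe, X, rng.permutation(y), cv=5).mean()
print(f"probe accuracy {real_acc:.2f} vs control {control_acc:.2f}")
```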
Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success …
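For orientation, the sketch below shows the general shape of a word-substitution attack: greedily swap words for synonyms until the victim model's label flips. The classifier and synonym table are toy stand-ins, not the attack method the excerpt evaluates.

```python
# Toy sketch of a greedy synonym-substitution attack: swap one word at a time
# and keep the first substitution that flips the victim classifier's label.
SYNONYMS = {
    "good": ["fine", "decent"],
    "movie": ["film", "picture"],
    "terrible": ["awful", "dreadful"],
}

def toy_classifier(text: str) -> int:
    """Stand-in sentiment model: 1 (positive) if 'good' appears, else 0."""
    return int("good" in text.split())

def greedy_attack(text: str) -> str:
    original_label = toy_classifier(text)
    tokens = text.split()
    for i, tok in enumerate(tokens):
        for alt in SYNONYMS.get(tok, []):
            perturbed = " ".join(tokens[:i] + [alt] + tokens[i + 1:])
            if toy_classifier(perturbed) != original_label:
                return perturbed     # minimal perturbation that flips the label
    return text                      # attack failed

print(greedy_attack("a good movie"))  # e.g. "a fine movie" flips the toy label
```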
R Ri, Y Tsuruoka - arXiv preprint arXiv:2203.10326, 2022 - arxiv.org
We investigate what kind of structural knowledge learned in neural network encoders is transferable to processing natural language. We design artificial languages with structural …
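The excerpt does not say which structural properties the artificial languages encode; a frequently studied one is nested (hierarchical) dependency. The sketch below generates a Dyck-style language of matched token pairs purely as an illustration; the paper's actual grammars may differ.

```python
# Minimal sketch of an artificial language with hierarchical structure:
# well-nested sequences of matching "word" pairs (a Dyck-style language).
import random

def nested_sequence(depth: int, pairs, rng: random.Random) -> list:
    """Recursively generate a well-nested sequence of matching token pairs."""
    if depth == 0:
        return []
    left, right = rng.choice(pairs)
    inner = nested_sequence(depth - 1, pairs, rng)
    return [left] + inner + [right]

rng = random.Random(0)
token_pairs = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
corpus = [" ".join(nested_sequence(rng.randint(1, 5), token_pairs, rng))
          for _ in range(3)]
print(corpus)   # each line is a well-nested string such as "a2 a1 b1 b2"
```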
Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language. For example, the words chopped, chef, and onion are …
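A quick way to see the redundancy claim: a bag-of-words representation, which discards order entirely, is identical across all orderings of the example words, so the lexical items alone already carry the cue. The snippet below is a toy demonstration, not an analysis from the excerpted paper.

```python
# Every ordering of the sentence yields exactly the same bag of words, so a
# model (or reader) relying on lexical content alone sees the same evidence.
from collections import Counter
from itertools import permutations

words = ["the", "chef", "chopped", "the", "onion"]
reference = Counter(words)

assert all(Counter(p) == reference for p in permutations(words))
print(len(set(permutations(words))), "orderings, one bag of words")
```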