Evidence Against Syntactic Encapsulation in Large Language Models

TA McGee, IA Blank - Proceedings of the Annual Meeting of the …, 2024 - escholarship.org
Transformer large language models (LLMs) perform exceptionally well across a variety of
linguistic tasks. These models represent relationships between words in a sentence via …

Active Use of Latent Constituency Representation in both Humans and Large Language Models

W Liu, M Xiang, N Ding - arXiv preprint arXiv:2405.18241, 2024 - arxiv.org
Understanding how sentences are internally represented in the human brain, as well as in
large language models (LLMs) such as ChatGPT, is a major challenge for cognitive science …

What Are Large Language Models Mapping to in the Brain? A Case Against Over-Reliance on Brain Scores

E Feghhi, N Hadidi, B Song, IA Blank… - arXiv preprint arXiv …, 2024 - arxiv.org
Given the remarkable capabilities of large language models (LLMs), there has been a
growing interest in evaluating their similarity to the human brain. One approach towards …

Collateral facilitation in humans and language models

JA Michaelov, BK Bergen - arXiv preprint arXiv:2211.05198, 2022 - arxiv.org
Are the predictions of humans and language models affected by similar things? Research
suggests that while comprehending language, humans make predictions about upcoming …

Do Large Language Models know who did what to whom?

J Denning, XH Guo, B Snefjella… - Proceedings of the Annual …, 2023 - escholarship.org
Large Language Models (LLMs), which match or exceed human performance on many
linguistic tasks, are nonetheless commonly criticized for not “understanding” language …

What Makes Language Models Good-enough?

D Asami, S Sugawara - arXiv preprint arXiv:2406.03666, 2024 - arxiv.org
Psycholinguistic research suggests that humans may build a representation of linguistic
input that is 'good-enough' for the task at hand. This study examines what architectural …

Reconstructing the cascade of language processing in the brain using the internal computations of a transformer-based language model

S Kumar, TR Sumers, T Yamakoshi, A Goldstein… - BioRxiv, 2022 - biorxiv.org
Piecing together the meaning of a narrative requires understanding not only the individual
words but also the intricate relationships between them. How does the brain construct this …

Neural Generative Models and the Parallel Architecture of Language: A Critical Review and Outlook

G Rambelli, E Chersoni, D Testa… - Topics in cognitive …, 2024 - Wiley Online Library
According to the parallel architecture, syntactic and semantic information processing are two
separate streams that interact selectively during language comprehension. While …

[PDF][PDF] Shared functional specialization in transformer-based language models and the human brain

S Kumar, TR Sumers, T Yamakoshi, A Goldstein… - 2022 - researchgate.net
Humans use complex linguistic structures to transmit ideas to one another. The brain is
thought to deploy specialized computations to process these structures. Recently, a new …

[BOOK][B] Exploring the Limits of Systematicity of Natural Language Understanding Models

K Sinha - 2022 - search.proquest.com
In this thesis, we investigate several approaches to evaluate modern neural language
models through the lens of systematicity, in order to assess their human-level reasoning and …