Dissociating language and thought in large language models

K Mahowald, AA Ivanova, IA Blank, N Kanwisher… - Trends in Cognitive …, 2024 - cell.com
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …

Over-reliance on English hinders cognitive science

DE Blasi, J Henrich, E Adamou, D Kemmerer… - Trends in Cognitive …, 2022 - cell.com
English is the dominant language in the study of human cognition and behavior: the
individuals studied by cognitive scientists, as well as most of the scientists themselves, are …

Semantic reconstruction of continuous language from non-invasive brain recordings

J Tang, A LeBel, S Jain, AG Huth - Nature Neuroscience, 2023 - nature.com
A brain–computer interface that decodes continuous language from non-invasive recordings
would have many scientific and practical applications. Currently, however, non-invasive …

High-resolution image reconstruction with latent diffusion models from human brain activity

Y Takagi, S Nishimoto - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Reconstructing visual experiences from human brain activity offers a unique way to
understand how the brain represents the world, and to interpret the connection between …

Evidence of a predictive coding hierarchy in the human brain listening to speech

C Caucheteux, A Gramfort, JR King - Nature human behaviour, 2023 - nature.com
Considerable progress has recently been made in natural language processing: deep
learning algorithms are increasingly able to generate, summarize, translate and classify …

An investigation across 45 languages and 12 language families reveals a universal language network

S Malik-Moraleda, D Ayyash, J Gallée, J Affourtit… - Nature …, 2022 - nature.com
To understand the architecture of human language, it is critical to examine diverse
languages; however, most cognitive neuroscience research has focused on only a handful …

Can language models learn from explanations in context?

AK Lampinen, I Dasgupta, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning …

Language models show human-like content effects on reasoning

I Dasgupta, AK Lampinen, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Abstract reasoning is a key ability for an intelligent system. Large language models (LMs)
achieve above-chance performance on abstract reasoning tasks, but exhibit many …

Brains and algorithms partially converge in natural language processing

C Caucheteux, JR King - Communications biology, 2022 - nature.com
Deep learning algorithms trained to predict masked words from large amounts of text have
recently been shown to generate activations similar to those of the human brain. However …

Using artificial neural networks to ask 'why' questions of minds and brains

N Kanwisher, M Khosla, K Dobs - Trends in Neurosciences, 2023 - cell.com
Neuroscientists have long characterized the properties and functions of the nervous system,
and are increasingly succeeding in answering how brains perform the tasks they do. But the …