Abstract: Language behaviour is complex, but neuroscientific evidence disentangles it into distinct components supported by dedicated brain areas or networks. In this Review, we …
To understand the architecture of human language, it is critical to examine diverse languages; however, most cognitive neuroscience research has focused on only a handful …
Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However …
Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses …
Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual …
Reading a sentence entails integrating the meanings of individual words to infer more complex, higher-order meaning. This rapid and complex human behavior is known to …
Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context …
Aside from the language-selective left-lateralized frontotemporal network, language comprehension sometimes recruits a domain-general bilateral frontoparietal network …
Abstract: Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects …