Dissociating language and thought in large language models

K Mahowald, AA Ivanova, IA Blank, N Kanwisher… - Trends in Cognitive …, 2024 - cell.com
Large language models (LLMs) have come closest among all models to date to mastering
human language, yet opinions about their linguistic and cognitive capabilities remain split …

Symbols and grounding in large language models

E Pavlick - … Transactions of the Royal Society A, 2023 - royalsocietypublishing.org
Large language models (LLMs) are one of the most impressive achievements of artificial
intelligence in recent years. However, their relevance to the study of language more broadly …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Shared computational principles for language processing in humans and deep language models

A Goldstein, Z Zada, E Buchnik, M Schain, A Price… - Nature …, 2022 - nature.com
Departing from traditional linguistic models, advances in deep learning have resulted in a
new type of predictive (autoregressive) deep language models (DLMs). Using a self …

Reasoning or reciting? Exploring the capabilities and limitations of language models through counterfactual tasks

Z Wu, L Qiu, A Ross, E Akyürek, B Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
The impressive performance of recent language models across a wide range of tasks
suggests that they possess a degree of abstract reasoning skills. Are these skills general …

Modern language models refute Chomsky's approach to language

S Piantadosi - Lingbuzz Preprint, lingbuzz, 2023 - lingbuzz.net
The rise and success of large language models undermines virtually every strong claim for
the innateness of language that has been proposed by generative linguistics. Modern …

What artificial neural networks can tell us about human language acquisition

A Warstadt, SR Bowman - Algebraic structures in natural …, 2022 - taylorfrancis.com
Rapid progress in machine learning for natural language processing has the potential to
transform debates about how humans learn language. However, the learning environments …

Do large language models know what humans know?

S Trott, C Jones, T Chang, J Michaelov… - Cognitive …, 2023 - Wiley Online Library
Humans can attribute beliefs to others. However, it is unknown to what extent this ability
results from an innate biological endowment or from experience accrued through child …

Systematic testing of three Language Models reveals low language accuracy, absence of response stability, and a yes-response bias

V Dentella, F Günther… - Proceedings of the …, 2023 - National Acad Sciences
Humans are universally good at providing stable and accurate judgments about what forms
part of their language and what does not. Large Language Models (LMs) are claimed to possess …

BabyBERTa: Learning more grammar with small-scale child-directed language

PA Huebner, E Sulem, F Cynthia… - Proceedings of the 25th …, 2021 - aclanthology.org
Transformer-based language models have taken the NLP world by storm. However, their
potential for addressing important questions in language acquisition research has been …