A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between …
How does language inform our downstream thinking? In particular, how do humans make meaning from language, and how can we leverage a theory of linguistic meaning to build …
Children can acquire language from fewer than 100 million words of input. Large language models are far less data-efficient: they typically require three to four orders of magnitude more data …
V Dentella, F Günther… - Proceedings of the …, 2023 - National Acad Sciences
Humans are universally good at providing stable and accurate judgments about what forms part of their language and what does not. Large Language Models (LMs) are claimed to possess …
E Yiu, E Kosoy, A Gopnik - Perspectives on Psychological …, 2023 - journals.sagepub.com
Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative …
Researchers studying the correspondences between Deep Neural Networks (DNNs) and humans often give little consideration to severe testing when drawing conclusions from …
Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context …
S Lappin - Journal of Logic, Language and Information, 2024 - Springer
The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature …
G Sartori, G Orrù - Frontiers in Psychology, 2023 - frontiersin.org
Large language models (LLMs) are demonstrating impressive performance on many reasoning and problem-solving tasks from cognitive psychology. When tested, their …