New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the …
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the …
Communications of the ACM, Volume 67, Number 4 (2024), Pages 50-59. The Science of Detecting LLM-Generated Text …
N Kim, T Linzen - Proceedings of the 2020 conference on …, 2020 - aclanthology.org
Natural language is characterized by compositionality: the meaning of a complex expression is constructed from the meanings of its constituent parts. To facilitate the evaluation of the …
We introduce CM3, a family of causally masked generative models trained over a large corpus of structured multi-modal documents that can contain both text and image tokens …
Music is often assumed to be a human universal, emerging from an evolutionary adaptation specific to music and/or a by-product of adaptations for affect …
Transformer neural networks can exhibit a surprising capacity for in-context learning (ICL), despite not being explicitly trained for it. Prior work has provided a deeper understanding of …
Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating …
Background: Unstructured text, including medical records, patient feedback, and social media comments, can be a rich source of data for clinical research. Natural language …