Semantic exploration from language abstractions and pretrained representations

A Tam, N Rabinowitz, A Lampinen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Effective exploration is a challenge in reinforcement learning (RL). Novelty-based
exploration methods can suffer in high-dimensional state spaces, such as continuous …

Alignment with human representations supports robust few-shot learning

I Sucholutsky, T Griffiths - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Should we care whether AI systems have representations of the world that are similar to
those of humans? We provide an information-theoretic analysis that suggests that there …

Getting aligned on representational alignment

I Sucholutsky, L Muttenthaler, A Weller, A Peng… - arXiv preprint arXiv …, 2023 - arxiv.org
Biological and artificial information processing systems form representations of the world
that they can use to categorize, reason, plan, navigate, and make decisions. To what extent …

Comparing color similarity structures between humans and LLMs via unsupervised alignment

G Kawakita, A Zeleznikow-Johnston… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs), such as the Generative Pre-trained Transformer (GPT), have
shown remarkable performance in various cognitive tasks. However, it remains unclear …

On the informativeness of supervision signals

I Sucholutsky, RM Battleday… - Uncertainty in …, 2023 - proceedings.mlr.press
Supervised learning typically focuses on learning transferable representations from training
examples annotated by humans. While rich annotations (like soft labels) carry more …

Large language models predict human sensory judgments across six modalities

R Marjieh, I Sucholutsky, P van Rijn, N Jacoby… - arXiv preprint arXiv …, 2023 - arxiv.org
Determining the extent to which the perceptual world can be recovered from language is a
longstanding problem in philosophy and cognitive science. We show that state-of-the-art …

Conceptual structure coheres in human cognition but not in large language models

S Suresh, K Mukherjee, X Yu, WC Huang… - arXiv preprint arXiv …, 2023 - arxiv.org
Neural network models of language have long been used as a tool for developing
hypotheses about conceptual representation in the mind and brain. For many years, such …

Giving robots a voice: Human-in-the-loop voice creation and open-ended labeling

P van Rijn, S Mertes, K Janowski, K Weitz… - Proceedings of the CHI …, 2024 - dl.acm.org
Speech is a natural interface for humans to interact with robots. Yet, aligning a robot's voice
to its appearance is challenging due to the rich vocabulary of both modalities. Previous …

Analyzing the roles of language and vision in learning from limited data

A Chen, I Sucholutsky, O Russakovsky… - arXiv preprint arXiv …, 2024 - arxiv.org
Does language help make sense of the visual world? How important is it to actually see the
world rather than having it described with words? These basic questions about the nature of …

Learning human-like representations to enable learning human values

A Wynn, I Sucholutsky, TL Griffiths - arXiv preprint arXiv:2312.14106, 2023 - arxiv.org
How can we build AI systems that are aligned with human values and objectives in order to
avoid causing harm or violating societal standards for acceptable behavior? Making AI …