Learnable latent embeddings for joint behavioural and neural analysis

S Schneider, JH Lee, MW Mathis - Nature, 2023 - nature.com
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …

Score-based generative modeling through stochastic differential equations

Y Song, J Sohl-Dickstein, DP Kingma, A Kumar… - arXiv preprint arXiv …, 2020 - arxiv.org
Creating noise from data is easy; creating data from noise is generative modeling. We
present a stochastic differential equation (SDE) that smoothly transforms a complex data …
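
For orientation, the paper's central construction pairs a forward noising SDE with its reverse-time counterpart; a standard summary (our notation, not quoted from the truncated abstract) is

dx = f(x, t)\,dt + g(t)\,dw \quad \text{(forward: data to noise)}
dx = \bigl[ f(x, t) - g(t)^2 \nabla_x \log p_t(x) \bigr]\,dt + g(t)\,d\bar{w} \quad \text{(reverse: noise to data)}

where the score \nabla_x \log p_t(x) is estimated by a neural network and sampling integrates the reverse-time SDE starting from noise.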

Self-supervised learning with data augmentations provably isolates content from style

J Von Kügelgen, Y Sharma, L Gresele… - Advances in neural …, 2021 - proceedings.neurips.cc
Self-supervised representation learning has shown remarkable success in a number of
domains. A common practice is to perform data augmentation via hand-crafted …
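
The "content from style" claim admits a compact formal sketch (our notation): the latent splits as z = (c, s), and augmentation resamples only the style part,

x = g(c, s), \qquad \tilde{x} = g(c, \tilde{s}), \qquad \tilde{s} \sim p(\tilde{s} \mid s),

and the identifiability result states that encoders trained on such augmented pairs recover the shared content c up to an invertible map.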

Contrastive learning inverts the data generating process

RS Zimmermann, Y Sharma… - International …, 2021 - proceedings.mlr.press
Contrastive learning has recently seen tremendous success in self-supervised learning. So
far, however, it is largely unclear why the learned representations generalize so effectively to …
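
The setting is usually the InfoNCE objective; in our notation, for a positive pair (x, \tilde{x}) and negatives x_i^-,

\mathcal{L} = -\,\mathbb{E}\left[ \log \frac{ \exp\bigl(\mathrm{sim}(f(x), f(\tilde{x})) / \tau\bigr) }{ \sum_i \exp\bigl(\mathrm{sim}(f(x), f(x_i^-)) / \tau\bigr) } \right],

and the paper's claim is that, under its generative assumptions (data x = g(z) with latents on a hypersphere), minimizers of this loss recover z up to simple transformations, i.e. f effectively inverts g.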

The emergence of reproducibility and consistency in diffusion models

H Zhang, J Zhou, Y Lu, M Guo, P Wang… - Forty-first International …, 2023 - openreview.net
In this work, we investigate an intriguing and prevalent phenomenon of diffusion models
which we term as" consistent model reproducibility'': given the same starting noise input and …

Nonparametric identifiability of causal representations from unknown interventions

J von Kügelgen, M Besserve… - Advances in …, 2024 - proceedings.neurips.cc
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …
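
The shared setup here and in the next entry is (our notation): latents z = (z_1, \ldots, z_n) are generated by a causal model over a DAG and observed only through a mixture

x = f(z),

with additional data collected under unknown interventions on individual z_i; identifiability means recovering the z_i and their causal graph up to natural symmetries such as permutation and elementwise reparametrization.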

Learning linear causal representations from interventions under general nonlinear mixing

S Buchholz, G Rajendran… - Advances in …, 2024 - proceedings.neurips.cc
We study the problem of learning causal representations from unknown, latent interventions
in a general setting, where the latent distribution is Gaussian but the mixing function is …
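
Relative to the previous entry, the latent model is restricted while the mixing stays general (our summary): the latents follow a linear Gaussian structural equation model and f is an unknown nonlinear injective map,

z = A z + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \Sigma), \qquad x = f(z),

with identifiability obtained from data gathered under unknown latent interventions.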

Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA

S Lachapelle, P Rodriguez, Y Sharma… - … on Causal Learning …, 2022 - proceedings.mlr.press
This work introduces a novel principle we call disentanglement via mechanism sparsity
regularization, which can be applied when the latent factors of interest depend sparsely on …
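
A sketch of the principle (our notation): temporal latents evolve under mechanisms whose dependency graph G is learned and penalized for density,

x_t = f(z_t), \qquad z_{t+1} \sim p(z_{t+1} \mid z_t, a_t; G), \qquad \max_\theta \; \log p_\theta(x_{1:T}) - \lambda \|G\|_0,

so that each latent depends on few past latents and actions; the paper argues that this sparsity yields disentanglement guarantees in a nonlinear-ICA sense.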

Generalized shape metrics on neural representations

AH Williams, E Kunz, S Kornblith… - Advances in Neural …, 2021 - proceedings.neurips.cc
Understanding the operation of biological and artificial networks remains a difficult and
important challenge. To identify general principles, researchers are increasingly interested …
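
A representative member of this family (our notation): for representation matrices X, Y \in \mathbb{R}^{m \times n} (m stimuli, n neurons), suitably preprocessed, the Procrustes-type distance is

d(X, Y) = \min_{Q \in \mathcal{O}(n)} \| X - Y Q \|_F,

generalized in the paper by varying the group of nuisance transformations; because these distances satisfy the triangle inequality, metric-space tools such as clustering apply directly to collections of networks.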

Unsupervised learning of compositional energy concepts

Y Du, S Li, Y Sharma, J Tenenbaum… - Advances in Neural …, 2021 - proceedings.neurips.cc
Humans are able to rapidly understand scenes by utilizing concepts extracted from prior
experience. Such concepts are diverse, and include global scene descriptors, such as the …
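
The decomposition proposed here (COMET) can be sketched as follows (our summary): each concept k is a learned energy function over images, parameterized by a latent z_k inferred from the input, and an image is explained or generated by minimizing the composed energy

x^* = \arg\min_x \sum_{k=1}^{K} E_\theta(x;\, z_k),

so that concepts combine by simply summing their energies.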