Inductive biases for deep learning of higher-level cognition

A Goyal, Y Bengio - Proceedings of the Royal Society A, 2022 - royalsocietypublishing.org
A fascinating hypothesis is that human and animal intelligence could be explained by a few
principles (rather than an encyclopaedic list of heuristics). If that hypothesis was correct, we …

State representation learning for control: An overview

T Lesort, N Díaz-Rodríguez, JF Goudou, D Filliat - Neural Networks, 2018 - Elsevier
Representation learning algorithms are designed to learn abstract features that
characterize data. State representation learning (SRL) focuses on a particular kind of …

On the binding problem in artificial neural networks

K Greff, S Van Steenkiste, J Schmidhuber - arXiv preprint arXiv …, 2020 - arxiv.org
Contemporary neural networks still fall short of human-level generalization, which extends
far beyond our direct experiences. In this paper, we argue that the underlying cause for this …

Learning deep representations by mutual information estimation and maximization

RD Hjelm, A Fedorov, S Lavoie-Marchildon… - arXiv preprint arXiv …, 2018 - arxiv.org
In this work, we perform unsupervised learning of representations by maximizing mutual
information between an input and the output of a deep neural network encoder. Importantly …
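The mutual-information-maximization idea in this entry can be illustrated with a minimal numpy sketch of an InfoNCE-style lower bound, one common estimator in this literature. This is an illustration only, not the paper's own objective: the dot-product critic, the function name, and the toy data here are all assumptions.

```python
import numpy as np

def infonce_lower_bound(x_feats, z_feats):
    """InfoNCE-style lower bound (in nats) on the mutual information
    between paired features (x_i, z_i). Pairs sharing an index are
    positives; all other pairs in the batch serve as negatives."""
    # Score every (x_i, z_j) pair with a dot product (a simple critic).
    scores = x_feats @ z_feats.T                                  # (n, n)
    # Row-wise log-softmax: the positive pair sits on the diagonal.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    n = x_feats.shape[0]
    # I(x; z) >= log n + mean diagonal log-probability.
    return log_probs[np.arange(n), np.arange(n)].mean() + np.log(n)

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 16))
x_dependent = z + 0.1 * rng.normal(size=(128, 16))   # strongly coupled to z
x_independent = rng.normal(size=(128, 16))           # unrelated to z

# Dependent pairs should yield a much larger bound than independent ones.
print(infonce_lower_bound(x_dependent, z), infonce_lower_bound(x_independent, z))
```

Maximizing such a bound with respect to the encoder producing the features is the core mechanism behind this family of representation-learning methods.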

Isolating sources of disentanglement in variational autoencoders

RTQ Chen, X Li, RB Grosse… - Advances in neural …, 2018 - proceedings.neurips.cc
We decompose the evidence lower bound to show the existence of a term measuring the
total correlation between latent variables. We use this to motivate the beta-TCVAE (Total …
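For orientation, the ELBO decomposition this snippet alludes to is commonly written as follows, with $n$ indexing data points and $z_j$ the individual latent dimensions. This is the standard form from the disentanglement literature, not a quote from the paper itself:

```latex
\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(z; n)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

The β-TCVAE objective upweights the total-correlation term by a factor β, penalizing statistical dependence among the latent dimensions.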

Visual reinforcement learning with imagined goals

AV Nair, V Pong, M Dalal, S Bahl… - Advances in neural …, 2018 - proceedings.neurips.cc
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be
able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to …

Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA

S Lachapelle, P Rodriguez, Y Sharma… - … on Causal Learning …, 2022 - proceedings.mlr.press
This work introduces a novel principle we call disentanglement via mechanism sparsity
regularization, which can be applied when the latent factors of interest depend sparsely on …

Learning neuro-symbolic skills for bilevel planning

T Silver, A Athalye, JB Tenenbaum… - arXiv preprint arXiv …, 2022 - arxiv.org
Decision-making is challenging in robotics environments with continuous object-centric
states, continuous actions, long horizons, and sparse feedback. Hierarchical approaches …

Learning disentangled representations in the imaging domain

X Liu, P Sanchez, S Thermos, AQ O'Neil… - Medical Image …, 2022 - Elsevier
Disentangled representation learning has been proposed as an approach to learning
general representations even in the absence of, or with limited, supervision. A good general …

Measuring the tendency of CNNs to learn surface statistical regularities

J Jo, Y Bengio - arXiv preprint arXiv:1711.11561, 2017 - arxiv.org
Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize
extremely well to a test set, while on the other hand they are extremely sensitive to so-called …