We develop the sparse VAE for unsupervised representation learning on high-dimensional data. The sparse VAE learns a set of latent factors (representations) which summarize the …
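For orientation, here is a minimal sketch of one way a sparsity-regularized VAE can be set up, assuming a plain MLP encoder/decoder and an L1 penalty on the latent means as the sparsity mechanism; the cited sparse VAE's actual prior and architecture may differ.

```python
# Sketch of a VAE whose latent codes are pushed toward sparsity via an
# L1 penalty on the posterior means (an illustrative choice, not
# necessarily the mechanism used in the paper above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def loss(model, x, l1_weight=1e-2):
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl + l1_weight * mu.abs().sum()  # L1 drives latents to 0
```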
Understanding neural networks is challenging in part because of the dense, continuous nature of their hidden states. We explore whether we can train neural networks to have …
The increasing availability of structured but high-dimensional data has opened new opportunities for optimization. One emerging and promising avenue is the exploration of …
Learning rich data representations from unlabeled data is a key challenge in applying deep learning algorithms to downstream tasks. Several variants of variational autoencoders …
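Such VAE variants typically optimize some form of the evidence lower bound (ELBO); in standard notation (which may differ from the cited works):

$$\mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\big\|\,p(z)\big),$$

i.e., a reconstruction term plus a regularizer pulling the approximate posterior toward the prior.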
T Ohki, N Kunii, ZC Chao - Reviews in the Neurosciences, 2023 - degruyter.com
There has been tremendous progress in artificial neural networks (ANNs) over the past decade; however, the gap between ANNs and the biological brain as a learning device …
The focus of recent research has shifted from merely increasing the performance of Deep Neural Networks (DNNs) on various tasks toward making DNNs more interpretable to humans. The …
K Fallah, CJ Rozell - arXiv preprint arXiv:2205.03665, 2022 - arxiv.org
Sparse coding strategies have been lauded for their parsimonious representations of data that leverage low dimensional structure. However, inference of these codes typically relies …
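Inference in sparse coding classically relies on an iterative proximal-gradient procedure such as ISTA; a minimal sketch under illustrative assumptions (random Gaussian dictionary, fixed step size), not the specific method of the paper above:

```python
# ISTA: solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by proximal gradient.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam=0.1, n_iters=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - x)           # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = D @ (rng.standard_normal(128) * (rng.random(128) < 0.05))
z = ista(D, x)                             # recovered sparse code
```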
Causal representation learning aims at identifying high-level causal variables from perceptual data. Most methods assume that all latent causal variables are captured in the …
K Sakamoto, I Sato - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The lottery ticket hypothesis (LTH) has attracted attention because it can explain why over-parameterized models often show high generalization ability. It is known that when we use …
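For context, lottery-ticket experiments typically find the sparse "winning ticket" by magnitude pruning with rewinding to the original initialization; a minimal single-round sketch, where the model and the omitted training loop are illustrative assumptions:

```python
# One round of lottery-ticket-style pruning: train, mask the smallest-
# magnitude weights globally, then rewind survivors to their initial values.
import copy
import torch
import torch.nn as nn

def magnitude_mask(params, sparsity=0.8):
    """Keep the (1 - sparsity) fraction of weights with largest magnitude."""
    all_w = torch.cat([p.detach().abs().flatten() for p in params])
    cutoff = torch.quantile(all_w, sparsity)
    return [(p.detach().abs() > cutoff).float() for p in params]

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding

# ... train `model` here ...

weights = [p for n, p in model.named_parameters() if n.endswith("weight")]
mask = magnitude_mask(weights, sparsity=0.8)
model.load_state_dict(init_state)               # rewind to initialization
with torch.no_grad():
    for p, m in zip(weights, mask):
        p.mul_(m)                               # apply the winning-ticket mask
```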