Comprehensive exploration of synthetic data generation: A survey

A Bauer, S Trapp, M Stenger, R Leppich… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent years have witnessed a surge in the popularity of Machine Learning (ML), applied
across diverse domains. However, progress is impeded by the scarcity of training data due …

Identifiable deep generative models via sparse decoding

GE Moran, D Sridhar, Y Wang, DM Blei - arXiv preprint arXiv:2110.10804, 2021 - arxiv.org
We develop the sparse VAE for unsupervised representation learning on high-dimensional
data. The sparse VAE learns a set of latent factors (representations) which summarize the …

Codebook features: Sparse and discrete interpretability for neural networks

A Tamkin, M Taufeeque, ND Goodman - arXiv preprint arXiv:2310.17230, 2023 - arxiv.org
Understanding neural networks is challenging in part because of the dense, continuous
nature of their hidden states. We explore whether we can train neural networks to have …

Good practices for Bayesian optimization of high dimensional structured spaces

E Siivola, A Paleyes, J González, A Vehtari - Applied AI Letters, 2021 - Wiley Online Library
The increasing availability of structured but high dimensional data has opened new
opportunities for optimization. One emerging and promising avenue is the exploration of …

SC-VAE: Sparse coding-based variational autoencoder with learned ISTA

P Xiao, P Qiu, SM Ha, A Bani, S Zhou, A Sotiras - Pattern Recognition, 2025 - Elsevier
Learning rich data representations from unlabeled data is a key challenge towards applying
deep learning algorithms in downstream tasks. Several variants of variational autoencoders …

Efficient, continual, and generalized learning in the brain–neural mechanism of Mental Schema 2.0–

T Ohki, N Kunii, ZC Chao - Reviews in the Neurosciences, 2023 - degruyter.com
There has been tremendous progress in artificial neural networks (ANNs) over the past
decade; however, the gap between ANNs and the biological brain as a learning device …

A survey on Concept-based Approaches For Model Improvement

A Gupta, PJ Narayanan - arXiv preprint arXiv:2403.14566, 2024 - arxiv.org
The focus of recent research has shifted from merely increasing the performance of Deep Neural
Networks (DNNs) in various tasks to making DNNs more interpretable to humans. The …

Variational sparse coding with learned thresholding

K Fallah, CJ Rozell - arXiv preprint arXiv:2205.03665, 2022 - arxiv.org
Sparse coding strategies have been lauded for their parsimonious representations of data
that leverage low dimensional structure. However, inference of these codes typically relies …

A sparsity principle for partially observable causal representation learning

D Xu, D Yao, S Lachapelle, P Taslakian… - arXiv preprint arXiv …, 2024 - arxiv.org
Causal representation learning aims at identifying high-level causal variables from
perceptual data. Most methods assume that all latent causal variables are captured in the …

Analyzing lottery ticket hypothesis from PAC-Bayesian theory perspective

K Sakamoto, I Sato - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The lottery ticket hypothesis (LTH) has attracted attention because it can explain why
over-parameterized models often show high generalization ability. It is known that when we use …