Orthogonal Jacobian regularization for unsupervised disentanglement in image generation

Y Wei, Y Shi, X Liu, Z Ji, Y Gao… - Proceedings of the …, 2021 - openaccess.thecvf.com
Unsupervised disentanglement learning is a crucial issue for understanding and exploiting
deep generative models. Recently, SeFa tries to find latent disentangled directions by …
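
The core idea can be illustrated with a minimal sketch (a toy example of my own, not the paper's released code): penalize the off-diagonal entries of the Gram matrix J^T J of the generator's Jacobian with respect to the latent code, so that perturbing different latent dimensions moves the output along roughly orthogonal directions.

import torch

def orthogonal_jacobian_penalty(generator, z):
    # Jacobian of the flattened generator output w.r.t. a single latent vector z;
    # shape (output_dim, latent_dim).
    J = torch.autograd.functional.jacobian(
        lambda latent: generator(latent).flatten(), z, create_graph=True
    )
    gram = J.T @ J                                  # (latent_dim, latent_dim)
    off_diag = gram - torch.diag(torch.diag(gram))  # zero out the diagonal
    return (off_diag ** 2).sum()

# Toy usage with a stand-in "generator": an MLP mapping an 8-d code to 256 outputs.
g = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(), torch.nn.Linear(64, 256))
z = torch.randn(8)
penalty = orthogonal_jacobian_penalty(g, z)
penalty.backward()  # differentiable, so it can be added to the generator's training loss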

Tutorial on amortized optimization

B Amos - Foundations and Trends® in Machine Learning, 2023 - nowpublishers.com
Optimization is a ubiquitous modeling tool and is often deployed in settings which
repeatedly solve similar instances of the same problem. Amortized optimization methods …
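
As a rough illustration of the idea (a toy sketch of my own, not code from the tutorial): rather than running an optimizer on every new problem instance, a network is trained to map problem parameters directly to approximate solutions, here by minimizing the objective value itself over a distribution of instances.

import torch

# Family of problems: minimize f(x; a, b) = 0.5*a*x^2 - b*x over x, with a > 0.
# The analytic minimizer is x* = b / a, which the amortization model should learn.
model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    a = torch.rand(128, 1) + 0.5                # sample problem parameters
    b = torch.randn(128, 1)
    x = model(torch.cat([a, b], dim=1))         # amortized solution x(a, b)
    loss = (0.5 * a * x ** 2 - b * x).mean()    # minimize the objective itself
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time a single forward pass replaces an inner optimization loop:
a_new, b_new = torch.tensor([[1.0]]), torch.tensor([[0.3]])
print(model(torch.cat([a_new, b_new], dim=1)))  # close to b/a = 0.3 after training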

Learning latent representations across multiple data domains using lifelong VAEGAN

F Ye, AG Bors - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
The problem of catastrophic forgetting occurs in deep learning models trained on multiple
databases in a sequential manner. Recently, generative replay mechanisms (GRM) have …

Multi-view representation learning via total correlation objective

HJ Hwang, GH Kim, S Hong… - Advances in Neural …, 2021 - proceedings.neurips.cc
Multi-View Representation Learning (MVRL) aims to discover a shared
representation of observations from different views with the complex underlying correlation …
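
For context, the standard definition of total correlation (which objectives of this kind build on; the paper's exact multi-view formulation may differ) measures how far the joint distribution of a latent vector z = (z_1, ..., z_d) is from the product of its marginals:

\mathrm{TC}(z) = D_{\mathrm{KL}}\!\left( q(z) \,\Big\|\, \prod_{j=1}^{d} q(z_j) \right) = \sum_{j=1}^{d} H(z_j) - H(z)

Driving TC toward zero encourages statistically independent latent dimensions; more generally, TC quantifies the total amount of dependence shared among the variables, which is what makes it a natural tool for modeling correlation across views.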

Flow factorized representation learning

Y Song, A Keller, N Sebe… - Advances in Neural …, 2023 - proceedings.neurips.cc
A prominent goal of representation learning research is to achieve representations which
are factorized in a useful manner with respect to the ground truth factors of variation. The …

InfoGAN-CR and ModelCentrality: Self-supervised model training and selection for disentangling GANs

Z Lin, K Thekumparampil, G Fanti… - … conference on machine …, 2020 - proceedings.mlr.press
Disentangled generative models map a latent code vector to a target space, while enforcing
that a subset of the learned latent codes are interpretable and associated with distinct …
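
The basic InfoGAN-style ingredient behind this line of work can be sketched as follows (a simplified illustration of the mutual-information term only, not the paper's contrastive regularizer or released code): an auxiliary head Q tries to recover the latent code c from the generated sample G(z, c), and minimizing its prediction loss maximizes a variational lower bound on I(c; G(z, c)).

import torch
import torch.nn.functional as F

latent_dim, code_dim, out_dim, batch = 16, 10, 256, 64

# Generator takes noise z plus a one-hot code c; Q tries to classify c from the output.
G = torch.nn.Sequential(torch.nn.Linear(latent_dim + code_dim, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, out_dim))
Q = torch.nn.Sequential(torch.nn.Linear(out_dim, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, code_dim))

z = torch.randn(batch, latent_dim)
c = torch.randint(0, code_dim, (batch,))   # sampled discrete codes
c_onehot = F.one_hot(c, code_dim).float()
fake = G(torch.cat([z, c_onehot], dim=1))
mi_loss = F.cross_entropy(Q(fake), c)      # minimizing this tightens a lower bound on I(c; G(z, c))
# mi_loss is added to the usual adversarial losses when updating G and Q.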

Variational interaction information maximization for cross-domain disentanglement

HJ Hwang, GH Kim, S Hong… - Advances in Neural …, 2020 - proceedings.neurips.cc
Cross-domain disentanglement is the problem of learning representations partitioned into
domain-invariant and domain-specific representations, which is a key to successful domain …
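
For reference, the interaction information among two domain variables X, Y and a representation Z is commonly defined as (sign conventions vary; this is the generic definition, not necessarily the paper's exact objective):

I(X; Y; Z) = I(X; Y) - I(X; Y \mid Z)

Maximizing such a quantity with respect to Z pushes Z to capture the information that X and Y have in common, which is the intuition behind separating domain-invariant from domain-specific factors.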

Lost in Latent Space: Examining failures of disentangled models at combinatorial generalisation

M Montero, J Bowers, R Ponte Costa… - Advances in …, 2022 - proceedings.neurips.cc
Recent research has shown that generative models with highly disentangled
representations fail to generalise to unseen combinations of generative factor values. These …

Where and what? Examining interpretable disentangled representations

X Zhu, C Xu, D Tao - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
Capturing interpretable variations has long been one of the goals in disentanglement
learning. However, unlike the independence assumption, interpretability has rarely been …

Learning joint latent representations based on information maximization

F Ye, AG Bors - Information Sciences, 2021 - Elsevier
Learning disentangled and interpretable representations is an important aspect of
information understanding. In this paper, we propose a novel deep learning model …