Beyond manual tuning of hyperparameters

F Hutter, J Lücke, L Schmidt-Thieme - KI-Künstliche Intelligenz, 2015 - Springer
The success of hand-crafted machine learning systems in many applications raises the
question of making machine learning algorithms more autonomous, i.e., to reduce the …

RIS-aided localization under pixel failures

C Ozturk, MF Keskin, V Sciancalepore… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Reconfigurable intelligent surfaces (RISs) hold great potential as one of the key
technological enablers for beyond-5G wireless networks, improving localization and …

A probabilistic framework for deep learning

AB Patel, MT Nguyen… - Advances in neural …, 2016 - proceedings.neurips.cc
We develop a probabilistic framework for deep learning based on the Deep Rendering
Mixture Model (DRMM), a new generative probabilistic model that explicitly captures …

k-means as a variational EM approximation of Gaussian mixture models

J Lücke, D Forster - Pattern Recognition Letters, 2019 - Elsevier
We show that k-means (Lloyd's algorithm) is obtained as a special case when truncated
variational EM approximations are applied to Gaussian mixture models (GMM) with isotropic …

Bayesian K-SVD using fast variational inference

JG Serra, M Testa, R Molina… - IEEE Transactions on …, 2017 - ieeexplore.ieee.org
Recent work in signal processing in general and image processing in particular deals with
sparse representation related problems. Two such problems are of paramount importance …

Evolutionary variational optimization of generative models

J Drefs, E Guiraud, J Lücke - Journal of machine learning research, 2022 - jmlr.org
We combine two popular optimization approaches to derive learning algorithms for
generative models: variational optimization and evolutionary algorithms. The combination is …

Learning sparse codes with entropy-based ELBOs

D Velychko, S Damm, A Fischer… - … Conference on Artificial …, 2024 - proceedings.mlr.press
Standard probabilistic sparse coding assumes a Laplace prior, a linear mapping from latents
to observables, and Gaussian observable distributions. We here derive a solely entropy …

Generic unsupervised optimization for a latent variable model with exponential family observables

H Mousavi, J Drefs, F Hirschberger, J Lücke - Journal of machine learning …, 2023 - jmlr.org
Latent variable models (LVMs) represent observed variables by parameterized functions of
latent variables. Prominent examples of LVMs for unsupervised learning are probabilistic …

Modeling emerging, evolving and fading topics using dynamic soft orthogonal NMF with sparse representation

Y Chen, H Zhang, J Wu, X Wang… - 2015 IEEE international …, 2015 - ieeexplore.ieee.org
Dynamic topic models (DTM) are of great use to analyze the evolution of unobserved topics
of a text collection over time. Recent years have witnessed the explosive growth of streaming …

Zero-shot denoising of microscopy images recorded at high-resolution limits

S Salwig, J Drefs, J Lücke - PLOS Computational Biology, 2024 - journals.plos.org
Conventional and electron microscopy visualize structures in the micrometer to nanometer
range, and such visualizations contribute decisively to our understanding of biological …