Generalized energy based models

M Arbel, L Zhou, A Gretton - arXiv preprint arXiv:2003.05033, 2020 - arxiv.org
We introduce the Generalized Energy Based Model (GEBM) for generative modelling. These
models combine two trained components: a base distribution (generally an implicit model) …
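
The snippet truncates the second component; in the paper it is an energy function that refines probability mass on the support learned by the base. As a hedged sketch of the construction (symbols B and E are ours, and sign conventions for the energy vary by author), the model re-weights base samples exponentially:

    P_{E,B}(dx) \propto \exp(-E(x)) \, B(dx),

with B the base distribution and E the trained energy.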

Maximum mean discrepancy gradient flow

M Arbel, A Korba, A Salim… - Advances in Neural …, 2019 - proceedings.neurips.cc
We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and
study its convergence properties. The MMD is an integral probability metric defined for a …
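
The snippet's definition cuts off; the standard one, which the flow here acts on, is the integral probability metric over the unit ball of the RKHS H of a kernel k, with a closed form in expectations of k:

    \mathrm{MMD}(P, Q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)],
    \mathrm{MMD}^2(P, Q) = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')].

A minimal NumPy sketch of the unbiased MMD^2 estimate (the Gaussian kernel and bandwidth here are our illustrative choices, not the paper's):

    import numpy as np

    def gaussian_kernel(X, Y, bandwidth=1.0):
        # Pairwise squared distances between rows of X and rows of Y.
        d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
        return np.exp(-d2 / (2 * bandwidth**2))

    def mmd2_unbiased(X, Y, bandwidth=1.0):
        # U-statistic estimate: diagonal (i == j) terms are dropped within each sample.
        Kxx, Kyy, Kxy = (gaussian_kernel(A, B, bandwidth)
                         for A, B in ((X, X), (Y, Y), (X, Y)))
        m, n = len(X), len(Y)
        return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
                - 2 * Kxy.mean())

For samples X ~ P and Y ~ Q, mmd2_unbiased(X, Y) concentrates near zero exactly when P = Q.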

Out of distribution generalization in machine learning

M Arjovsky - 2020 - search.proquest.com
Machine learning has achieved tremendous success in a variety of domains in
recent years. However, a lot of these success stories have been in places where the training …

Spurious valleys in one-hidden-layer neural network optimization landscapes

L Venturi, AS Bandeira, J Bruna - Journal of Machine Learning Research, 2019 - jmlr.org
Neural networks provide a rich class of high-dimensional, non-convex optimization
problems. Despite their non-convexity, gradient-descent methods often successfully …

Scenarios modelling for forecasting day-ahead electricity prices: Case studies in Australia

X Lu, J Qiu, G Lei, J Zhu - Applied Energy, 2022 - Elsevier
Electricity prices in spot markets are volatile and can be affected by various factors, such as
generation and demand, system contingencies, local weather patterns, bidding strategies of …

On gradient regularizers for MMD GANs

M Arbel, DJ Sutherland… - Advances in neural …, 2018 - proceedings.neurips.cc
We propose a principled method for gradient-based regularization of the critic of GAN-like
models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy …
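
The snippet describes gradient-based regularization of an adversarially trained kernel critic. The paper's own construction is not reproduced here; as a generic illustration of the idea, a WGAN-GP-style gradient penalty on a critic, sketched in PyTorch with all names ours:

    import torch

    def gradient_penalty(critic, real, fake, lam=10.0):
        # Random interpolates between real and generated batches
        # (assumes 2-D (batch, features) inputs).
        eps = torch.rand(real.size(0), 1, device=real.device)
        x = (eps * real + (1 - eps) * fake).requires_grad_(True)
        # Critic gradients at the interpolates, kept in the graph for backprop.
        grads, = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
        # Penalize critic gradient norms that deviate from 1.
        return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

The penalty is added to the critic loss; keeping the critic's gradients bounded is the failure mode gradient regularizers of this kind generally target.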

Smoothness and stability in GANs

C Chu, K Minami, K Fukumizu - arXiv preprint arXiv:2002.04185, 2020 - arxiv.org
Generative adversarial networks, or GANs, commonly display unstable behavior during
training. In this work, we develop a principled theoretical framework for understanding the …

Nonparametric density estimation under adversarial losses

S Singh, A Uppal, B Li, CL Li… - Advances in Neural …, 2018 - proceedings.neurips.cc
We study minimax convergence rates of nonparametric density estimation under a large
class of loss functions called "adversarial losses", which, besides classical L^p losses …
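
The class is cut off in the snippet; the usual definition indexes an integral probability metric by a discriminator class F,

    d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \right|,

with F an L^{p'}, Sobolev, Lipschitz, or RKHS ball recovering L^p, Sobolev, Wasserstein, and MMD losses as special cases.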

Nonparametric density estimation & convergence rates for GANs under Besov IPM losses

A Uppal, S Singh, B Póczos - Advances in neural …, 2019 - proceedings.neurips.cc
We study the problem of estimating a nonparametric probability distribution under a family of
losses called Besov IPMs. This family is quite large, including, for example, L^p distances …

Statistical inference for generative models with maximum mean discrepancy

FX Briol, A Barp, AB Duncan, M Girolami - arXiv preprint arXiv:1906.05944, 2019 - arxiv.org
While likelihood-based inference and its variants provide a statistically efficient and widely
applicable approach to parametric inference, their application to models involving …
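
The estimator this line of work studies is minimum-MMD point estimation; as a hedged statement, with \hat{P}_n the empirical distribution of the data and notation ours,

    \hat{\theta}_n = \operatorname*{arg\,min}_{\theta \in \Theta} \mathrm{MMD}^2(P_\theta, \hat{P}_n),

where, for simulator-only models, \mathrm{MMD}^2 is itself estimated from draws X_i \sim P_\theta (e.g., with the U-statistic sketched above).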