We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties. The MMD is an integral probability metric defined for a …
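For reference, the definition the snippet begins can be stated in its standard form: the MMD is the integral probability metric over the unit ball of a reproducing kernel Hilbert space \(\mathcal{H}_k\), and it admits a closed form in the kernel \(k\) (the kernel choice is not in the snippet and is assumed here):

```latex
\mathrm{MMD}(P, Q) = \sup_{\|f\|_{\mathcal{H}_k} \le 1}
  \bigl( \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \bigr),
\qquad
\mathrm{MMD}^2(P, Q) = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')],
```

with \(X, X' \sim P\) and \(Y, Y' \sim Q\) independent. A minimal sketch of a discretised MMD flow follows, assuming a fixed Gaussian kernel and illustrative constants; it does not reproduce the paper's exact scheme or its convergence conditions. Particles descend the witness function of the MMD between their own empirical measure and a fixed target sample.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0  # Gaussian-kernel bandwidth (illustrative choice)

def grad_witness(z, mu, target):
    """Gradient of the MMD witness f(z) = mean_i k(z, x_i) - mean_j k(z, y_j)
    for the Gaussian kernel k(z, x) = exp(-||z - x||^2 / (2 sigma^2))."""
    def term(pts):
        diff = z[:, None, :] - pts[None, :, :]                    # (m, n, d)
        kv = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))   # (m, n)
        return np.mean(-diff / sigma**2 * kv[..., None], axis=1)  # (m, d)
    return term(mu) - term(target)

# Forward-Euler discretisation: each particle follows the negative witness
# gradient between the particles' empirical measure and the target sample.
target = rng.normal(loc=2.0, scale=0.5, size=(200, 1))
particles = rng.normal(loc=0.0, scale=0.5, size=(200, 1))
step = 0.1
for _ in range(1000):
    particles = particles - step * grad_witness(particles, particles, target)
print(particles.mean())  # should drift toward the target mean (about 2.0)
```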
Machine learning has achieved tremendous success in a variety of domains in recent years. However, many of these success stories have been in settings where the training …
Neural networks provide a rich class of high-dimensional, non-convex optimization problems. Despite their non-convexity, gradient-descent methods often successfully …
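A toy illustration of the phenomenon this snippet describes, assuming nothing from the paper itself: plain full-batch gradient descent on a one-hidden-layer network, a problem that is non-convex in the weights yet is typically fit successfully. All architecture and step-size choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression: fit y = sin(x) with a one-hidden-layer tanh network.
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x)

h = 32  # hidden width (illustrative)
W1 = rng.normal(size=(1, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=1 / np.sqrt(h), size=(h, 1))

lr = 0.1
for _ in range(5000):
    a = np.tanh(x @ W1 + b1)       # forward pass
    err = a @ W2 - y               # residuals
    gW2 = a.T @ err / len(x)       # backprop of mean squared error
    da = err @ W2.T * (1 - a**2)   # (constant factor 2 absorbed into lr)
    gW1 = x.T @ da / len(x)
    gb1 = da.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2

mse = np.mean((np.tanh(x @ W1 + b1) @ W2 - y) ** 2)
print(mse)  # typically small despite the non-convex loss surface
```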
X Lu, J Qiu, G Lei, J Zhu - Applied Energy, 2022 - Elsevier
Electricity prices in spot markets are volatile and can be affected by various factors, such as generation and demand, system contingencies, local weather patterns, bidding strategies of …
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy …
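The method in this snippet learns a deep kernel adversarially and regularizes the critic's gradient; the paper's exact penalty is not reproduced here. As a hedged sketch of the ingredient, the following computes a biased MMD² estimate for a fixed Gaussian kernel and penalizes the squared gradient norm of its closed-form witness (the critic) at points interpolated between the two samples. The names and constants (`lam`, `sigma`, the interpolation scheme) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0  # fixed Gaussian-kernel bandwidth (the paper learns its kernel)

def k(a, b):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y):
    """Biased (V-statistic) estimate of squared MMD between samples x and y."""
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()

def witness_grad_sqnorm(z, x, y):
    """||grad f(z)||^2 for the witness f = mean_i k(., x_i) - mean_j k(., y_j);
    with a Gaussian kernel this gradient has a closed form."""
    def g(pts):
        diff = z[:, None, :] - pts[None, :, :]
        kv = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))
        return np.mean(-diff / sigma**2 * kv[..., None], axis=1)
    return np.sum((g(x) - g(y)) ** 2, axis=-1)

x = rng.normal(size=(128, 2))              # "real" sample
y = rng.normal(loc=0.5, size=(128, 2))     # "generated" sample
t = rng.uniform(size=(128, 1))
z = t * x + (1 - t) * y                    # points between the two samples
lam = 1.0                                  # penalty weight (illustrative)
critic_objective = -mmd2(x, y) + lam * witness_grad_sqnorm(z, x, y).mean()
print(critic_objective)
```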
Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the …
We study minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which, besides classical L^p losses …
We study the problem of estimating a nonparametric probability distribution under a family of losses called Besov IPMs. This family is quite large, including, for example, L^p distances …
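Both of the preceding snippets refer to losses of integral-probability-metric form. The standard definition, indexed by a class \(\mathcal{F}\) of discriminator functions, is given below; taking \(\mathcal{F}\) to be an \(L^q\) ball (with \(1/p + 1/q = 1\)), a Besov ball, an RKHS ball, or a Lipschitz ball recovers \(L^p\) distances, Besov IPMs, the MMD, and the Wasserstein-1 distance, respectively.

```latex
% Integral probability metric ("adversarial loss") over a discriminator class F.
d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
  \Bigl|\, \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \,\Bigr|
```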
While likelihood-based inference and its variants provide a statistically efficient and widely applicable approach to parametric inference, their application to models involving …
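Assuming the truncated abstract concerns simulation-based (likelihood-free) estimation, here is a minimal minimum-distance sketch in that spirit: the model is accessed only through samples, and a point estimate is found by minimizing the empirical MMD between simulated and observed data. The Gaussian model, bandwidth, and grid search are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0

def mmd2(x, y):
    """Biased estimate of squared MMD with a Gaussian kernel."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()

# Intractable-likelihood stand-in: the model can only be sampled from.
# Here the "model" is N(theta, 1) in one dimension.
def simulate(theta, n=500):
    return rng.normal(loc=theta, size=(n, 1))

data = rng.normal(loc=1.3, size=(500, 1))   # observed sample, true theta = 1.3

# Minimum-MMD point estimate by coarse grid search (gradient-based
# optimisation over theta would be used in practice).
thetas = np.linspace(-3, 3, 121)
losses = [mmd2(simulate(t), data) for t in thetas]
print(thetas[int(np.argmin(losses))])       # should land near 1.3
```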