A tutorial on kernel density estimation and recent advances

YC Chen - Biostatistics & Epidemiology, 2017 - Taylor & Francis
This tutorial provides a gentle introduction to kernel density estimation (KDE) and recent
advances regarding confidence bands and geometric/topological features. We begin with a …

Gaussian processes and kernel methods: A review on connections and equivalences

M Kanagawa, P Hennig, D Sejdinovic… - arXiv preprint arXiv …, 2018 - arxiv.org
This paper is an attempt to bridge the conceptual gaps between researchers working on the
two widely used approaches based on positive definite kernels: Bayesian learning or …

CutPaste: Self-supervised learning for anomaly detection and localization

CL Li, K Sohn, J Yoon, T Pfister - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We aim at constructing a high performance model for defect detection that detects unknown
anomalous patterns of an image without anomalous data. To this end, we propose a two …

Deep learning: a statistical viewpoint

PL Bartlett, A Montanari, A Rakhlin - Acta numerica, 2021 - cambridge.org
The remarkable practical success of deep learning has revealed some major surprises from
a theoretical perspective. In particular, simple gradient methods easily find near-optimal …

Distribution matching for crowd counting

B Wang, H Liu, D Samaras… - Advances in neural …, 2020 - proceedings.neurips.cc
In crowd counting, each training image contains multiple people, where each person is
annotated by a dot. Existing crowd counting methods need to use a Gaussian to smooth …

Diffusion models are minimax optimal distribution estimators

K Oko, S Akiyama, T Suzuki - International Conference on …, 2023 - proceedings.mlr.press
While efficient distribution learning is no doubt behind the groundbreaking success of
diffusion modeling, its theoretical guarantees are quite limited. In this paper, we provide the …

A theoretical analysis of deep Q-learning

J Fan, Z Wang, Y Xie, Z Yang - Learning for dynamics and …, 2020 - proceedings.mlr.press
Despite the great empirical success of deep reinforcement learning, its theoretical
foundation is less well understood. In this work, we make the first attempt to theoretically …

Learning and evaluating representations for deep one-class classification

K Sohn, CL Li, J Yoon, M Jin, T Pfister - arXiv preprint arXiv:2011.02578, 2020 - arxiv.org
We present a two-stage framework for deep one-class classification. We first learn self-
supervised representations from one-class data, and then build one-class classifiers on …

[BOOK][B] Bandit algorithms

T Lattimore, C Szepesvári - 2020 - books.google.com
Decision-making in the face of uncertainty is a significant challenge in machine learning,
and the multi-armed bandit model is a commonly used framework to address it. This …

Exponentially tighter bounds on limitations of quantum error mitigation

Y Quek, D Stilck França, S Khatri, JJ Meyer, J Eisert - Nature Physics, 2024 - nature.com
Quantum error mitigation has been proposed as a means to combat unwanted and
unavoidable errors in near-term quantum computing without the heavy resource overheads …