Priors in Bayesian deep learning: A review

V Fortuin - International Statistical Review, 2022 - Wiley Online Library
While the choice of prior is one of the most critical parts of the Bayesian inference workflow,
recent Bayesian deep learning models have often fallen back on vague priors, such as …

Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons

AF Psaros, X Meng, Z Zou, L Guo… - Journal of Computational …, 2023 - Elsevier
Neural networks (NNs) are currently changing, in a profound way, the computational paradigm for how
data are combined with mathematical laws in physics and engineering …

Bayesian deep ensembles via the neural tangent kernel

B He, B Lakshminarayanan… - Advances in neural …, 2020 - proceedings.neurips.cc
We explore the link between deep ensembles and Gaussian processes (GPs) through the
lens of the Neural Tangent Kernel (NTK): a recent development in understanding the …

Repulsive deep ensembles are Bayesian

F D'Angelo, V Fortuin - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Deep ensembles have recently gained popularity in the deep learning community for their
conceptual simplicity and efficiency. However, maintaining functional diversity between …

Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools

D Biderman, MR Whiteway, C Hurwitz, N Greenspan… - Nature …, 2024 - nature.com
Contemporary pose estimation methods enable precise measurements of behavior via
supervised deep learning with hand-labeled video frames. Although effective in many cases …

Masksembles for uncertainty estimation

N Durasov, T Bagautdinov… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep neural networks have amply demonstrated their prowess but estimating the reliability
of their predictions remains challenging. Deep Ensembles are widely considered as being …

Deep ensembles work, but are they necessary?

T Abe, EK Buchanan, G Pleiss… - Advances in …, 2022 - proceedings.neurips.cc
Ensembling neural networks is an effective way to increase accuracy, and can often match
the performance of individual larger models. This observation poses a natural question …

Anti-exploration by random network distillation

A Nikulin, V Kurenkov, D Tarasov… - … on Machine Learning, 2023 - proceedings.mlr.press
Despite the success of Random Network Distillation (RND) in various domains, it was shown
to be insufficiently discriminative to be used as an uncertainty estimator for penalizing out-of …

Exploration in deep reinforcement learning: From single-agent to multiagent domain

J Hao, T Yang, H Tang, C Bai, J Liu… - … on Neural Networks …, 2023 - ieeexplore.ieee.org
Deep reinforcement learning (DRL) and deep multiagent reinforcement learning (MARL)
have achieved significant success across a wide range of domains, including game artificial …

Scalable uncertainty quantification for deep operator networks using randomized priors

Y Yang, G Kissas, P Perdikaris - Computer Methods in Applied Mechanics …, 2022 - Elsevier
We present a simple and effective approach for posterior uncertainty quantification in deep
operator networks (DeepONets), an emerging paradigm for supervised learning in function …