Understanding metric-related pitfalls in image analysis validation

A Reinke, MD Tizabi, M Baumgartner, M Eisenmann… - Nature …, 2024 - nature.com
Validation metrics are key for tracking scientific progress and bridging the current chasm
between artificial intelligence research and its translation into practice. However, increasing …

Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

B Lambert, F Forbes, S Doyle, H Dehaene… - Artificial Intelligence in …, 2024 - Elsevier
The full acceptance of Deep Learning (DL) models in the clinical field is rather low relative to
the quantity of high-performing solutions reported in the literature. End users are …

A survey of uncertainty in deep neural networks

J Gawlikowski, CRN Tassi, M Ali, J Lee, M Humt… - Artificial Intelligence …, 2023 - Springer
Over the last decade, neural networks have reached almost every field of science and
have become a crucial part of various real-world applications. Due to the increasing spread …

Predictive performance of presence‐only species distribution models: a benchmark study with reproducible code

R Valavi, G Guillera‐Arroita… - Ecological …, 2022 - Wiley Online Library
Species distribution modeling (SDM) is widely used in ecology and conservation. Currently,
the most available data for SDM are species presence‐only records (available through …

Unsolved problems in ML safety

D Hendrycks, N Carlini, J Schulman… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning (ML) systems are rapidly increasing in size, are acquiring new
capabilities, and are increasingly deployed in high-stakes settings. As with other powerful …

Uncertainty quantification over graph with conformalized graph neural networks

K Huang, Y Jin, E Candes… - Advances in Neural …, 2024 - proceedings.neurips.cc
Graph Neural Networks (GNNs) are powerful machine learning prediction models
on graph-structured data. However, GNNs lack rigorous uncertainty estimates, limiting their …

Dark experience for general continual learning: a strong, simple baseline

P Buzzega, M Boschini, A Porrello… - Advances in neural …, 2020 - proceedings.neurips.cc
Continual Learning has inspired a plethora of approaches and evaluation settings; however,
the majority of them overlook the properties of a practical scenario, where the data stream …

Disentangling label distribution for long-tailed visual recognition

Y Hong, S Han, K Choi, S Seo… - Proceedings of the …, 2021 - openaccess.thecvf.com
The current evaluation protocol of long-tailed visual recognition trains the classification
model on the long-tailed source label distribution and evaluates its performance on the …

Uncertainty as a form of transparency: Measuring, communicating, and using uncertainty

U Bhatt, J Antorán, Y Zhang, QV Liao… - Proceedings of the …, 2021 - dl.acm.org
Algorithmic transparency entails exposing system properties to various stakeholders for
purposes that include understanding, improving, and contesting predictions. Until now, most …

Calibrating deep neural networks using focal loss

J Mukhoti, V Kulharia, A Sanyal… - Advances in …, 2020 - proceedings.neurips.cc
Miscalibration of Deep Neural Networks (DNNs), a mismatch between a model's confidence
and its correctness, makes their predictions hard to rely on. Ideally, we want networks …
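The confidence–correctness mismatch described in this last abstract is commonly quantified with the expected calibration error (ECE). A minimal sketch, assuming toy confidence scores and 0/1 correctness labels (the function name and binning scheme below are illustrative, not taken from the cited paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the absolute gap between
    mean confidence and empirical accuracy in each bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# Toy example: an overconfident model (high confidence, mixed correctness)
conf = [0.95, 0.92, 0.91, 0.88, 0.55]
hit = [1, 0, 1, 0, 1]
print(round(expected_calibration_error(conf, hit), 3))  # → 0.422
```

A perfectly calibrated model (e.g. 90% of its 0.9-confidence predictions correct) would yield an ECE of zero; the large value above reflects the overconfidence the snippet describes.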