Finding the right XAI method—a guide for the evaluation and ranking of explainable AI methods in climate science

PL Bommer, M Kretschmer, A Hedström… - … Intelligence for the …, 2024 - journals.ametsoc.org
Explainable artificial intelligence (XAI) methods shed light on the predictions of machine
learning algorithms. Several different approaches exist and have already been applied in …

NoiseGrad—enhancing explanations by introducing stochasticity to model weights

K Bykov, A Hedström, S Nakajima… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Many efforts have been made to reveal the decision-making process of black-box
learning machines such as deep neural networks, resulting in useful local and global …

DORA: Exploring outlier representations in deep neural networks

K Bykov, M Deb, D Grinwald, KR Müller… - arXiv preprint arXiv …, 2022 - arxiv.org
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …

Manipulating feature visualizations with gradient slingshots

D Bareeva, MMC Höhne, A Warnecke, L Pirch… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep Neural Networks (DNNs) are capable of learning complex and versatile
representations; however, the semantic nature of the learned concepts remains unknown. A …

CoSy: Evaluating Textual Explanations of Neurons

L Kopf, PL Bommer, A Hedström, S Lapuschkin… - arXiv preprint arXiv …, 2024 - arxiv.org
A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is
the ability to explain learned concepts within their latent representations. While various …

Evaluating, Explaining, and Utilizing Model Uncertainty in High-Performing, Opaque Machine Learning Models

KE Brown - 2023 - search.proquest.com
Machine learning has made tremendous strides over the past decades in producing state-of-
the-art results in safety-critical fields such as self-driving vehicles and medicine. Current …