Explaining deep neural networks and beyond: A review of methods and applications

W Samek, G Montavon, S Lapuschkin… - Proceedings of the …, 2021 - ieeexplore.ieee.org
With the broader and highly successful usage of machine learning (ML) in industry and the
sciences, there has been a growing demand for explainable artificial intelligence (XAI) …

XAI systems evaluation: A review of human and computer-centred methods

P Lopes, E Silva, C Braga, T Oliveira, L Rosado - Applied Sciences, 2022 - mdpi.com
The lack of transparency of powerful Machine Learning systems paired with their growth in
popularity over the last decade led to the emergence of the eXplainable Artificial Intelligence …

From attribution maps to human-understandable explanations through concept relevance propagation

R Achtibat, M Dreyer, I Eisenbraun, S Bosse… - Nature Machine …, 2023 - nature.com
The field of explainable artificial intelligence (XAI) aims to bring transparency to today's
powerful but opaque deep learning models. While local XAI methods explain individual …

Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond

A Hedström, L Weber, D Krakowczyk, D Bareeva… - Journal of Machine …, 2023 - jmlr.org
The evaluation of explanation methods is a research topic that has not yet been explored
deeply, however, since explainability is supposed to strengthen trust in artificial intelligence …

Explainable AI methods - a brief overview

A Holzinger, A Saranti, C Molnar, P Biecek… - … workshop on extending …, 2022 - Springer
Explainable Artificial Intelligence (xAI) is an established field with a vibrant
community that has developed a variety of very successful approaches to explain and …

Debugging tests for model explanations

J Adebayo, M Muelly, I Liccardi, B Kim - arXiv preprint arXiv:2011.05429, 2020 - arxiv.org
We investigate whether post-hoc model explanations are effective for diagnosing model
errors--model debugging. In response to the challenge of explaining a model's prediction, a …

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience

A Mamalakis, EA Barnes… - Artificial Intelligence for …, 2022 - journals.ametsoc.org
Convolutional neural networks (CNNs) have recently attracted great attention in geoscience
because of their ability to capture nonlinear system behavior and extract predictive …

A survey on the interpretability of deep learning in medical diagnosis

Q Teng, Z Liu, Y Song, K Han, Y Lu - Multimedia Systems, 2022 - Springer
Deep learning has demonstrated remarkable performance in the medical domain, with
accuracy that rivals or even exceeds that of human experts. However, it has a significant …

Understanding the (extra-) ordinary: Validating deep model decisions with prototypical concept-based explanations

M Dreyer, R Achtibat, W Samek… - Proceedings of the …, 2024 - openaccess.thecvf.com
Ensuring both transparency and safety is critical when deploying Deep Neural Networks
(DNNs) in high-risk applications such as medicine. The field of explainable AI (XAI) has …

CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations

L Arras, A Osman, W Samek - Information Fusion, 2022 - Elsevier
The rise of deep learning in today's applications entailed an increasing need in explaining
the model's decisions beyond prediction performances in order to foster trust and …