This looks more like that: Enhancing self-explaining models by prototypical relevance propagation

S Gautam, MMC Höhne, S Hansen, R Jenssen… - Pattern Recognition, 2023 - Elsevier
Current machine learning models have shown high efficiency in solving a wide variety of
real-world problems. However, their black box character poses a major challenge for the …

Information maximization perspective of orthogonal matching pursuit with applications to explainable AI

A Chattopadhyay, R Pilgrim… - Advances in Neural …, 2024 - proceedings.neurips.cc
Information Pursuit (IP) is a classical active testing algorithm for predicting an output
by sequentially and greedily querying the input in order of information gain. However, IP is …

Interpretable by design: Learning predictors by composing interpretable queries

A Chattopadhyay, S Slocum… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
There is growing concern about the typically opaque decision-making of high-performance
machine learning algorithms. Providing an explanation of the reasoning process in domain …

Limitations of deep learning for inverse problems on digital hardware

H Boche, A Fono, G Kutyniok - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep neural networks have seen tremendous success in recent years. Since the training
is performed on digital hardware, in this paper, we analyze what actually can be computed …

RELAX: Representation learning explainability

KK Wickstrøm, DJ Trosten, S Løkse, A Boubekki… - International Journal of …, 2023 - Springer
Despite the significant improvements that self-supervised representation learning has led to
when learning from unlabeled data, no methods have been developed that explain what …

Interpretability with full complexity by constraining feature information

KA Murphy, DS Bassett - arXiv preprint arXiv:2211.17264, 2022 - arxiv.org
Interpretability is a pressing issue for machine learning. Common approaches to
interpretable machine learning constrain interactions between features of the input …

SHAP-XRT: The Shapley Value Meets Conditional Independence Testing

J Teneggi, B Bharti, Y Romano, J Sulam - arXiv preprint arXiv:2207.07038, 2022 - arxiv.org
The complex nature of artificial neural networks raises concerns about their reliability,
trustworthiness, and fairness in real-world scenarios. The Shapley value--a solution concept …

An introduction to the mathematics of deep learning

G Kutyniok - European Congress of Mathematics, 2023 - content.ems.press
Despite the outstanding success of deep neural networks in real-world applications, ranging
from science to public life, most of the related research is empirically driven and a …

Towards explaining sequences of actions in multi-agent deep reinforcement learning models

PW Khaing, M Geng, B Subagdja, S Pateria, AH Tan - 2023 - ink.library.smu.edu.sg
Although Multi-agent Deep Reinforcement Learning (MADRL) has shown
promising results in solving complex real-world problems, the applicability and reliability of …

Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks

BL Møller, C Igel, KK Wickstrøm, J Sporring… - Forty-first International … - openreview.net
Unsupervised representation learning has become an important ingredient of today's deep
learning systems. However, only a few methods exist that explain a learned vector …