DORA: Exploring outlier representations in deep neural networks

K Bykov, M Deb, D Grinwald, KR Müller… - arXiv preprint arXiv …, 2022 - arxiv.org
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …

Uncertainty in XAI: Human Perception and Modeling Approaches

T Chiaburu, F Haußer, F Bießmann - Machine Learning and Knowledge …, 2024 - mdpi.com
Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. In
order to foster trust in AI predictions, many approaches towards explainable AI (XAI) have …

XAI Unveiled: Revealing the Potential of Explainable AI in Medicine-A Systematic Review

N Scarpato, P Ferroni, F Guadagni - IEEE Access, 2024 - ieeexplore.ieee.org
Nowadays, artificial intelligence in medicine plays a leading role. This necessitates ensuring
that artificial intelligence systems are not only high-performing but also …

Finding Spurious Correlations with Function-Semantic Contrast Analysis

K Bykov, L Kopf, MMC Höhne - World Conference on Explainable Artificial …, 2023 - Springer
In the field of Computer Vision (CV), the degree to which two objects, e.g., two classes, share
a common conceptual meaning, known as semantic similarity, is closely linked to the visual …

Finding the input features that reduce the entropy of a neural network's prediction

N Amanova, J Martin, C Elster - Applied Intelligence, 2024 - Springer
In deep learning-based image classification, the entropy of a neural network's output is often
taken as a measure of its uncertainty. We introduce an explainability method that identifies …

Harnessing artificial intelligence for enhanced veterinary diagnostics: A look to quality assurance, Part I Model development

C Pacholec, B Flatland, H Xie… - Veterinary clinical …, 2024 - Wiley Online Library
Artificial intelligence (AI) has transformative potential in veterinary pathology in tasks ranging
from cell enumeration and cancer detection to prognosis forecasting, virtual staining …

Investigating the Impact of Model Instability on Explanations and Uncertainty

SV Marjanović, I Augenstein, C Lioma - arXiv preprint arXiv:2402.13006, 2024 - arxiv.org
Explainable AI methods facilitate the understanding of model behaviour, yet small,
imperceptible perturbations to inputs can vastly distort explanations. As these explanations …

Transparency and reliability assurance methods for safeguarding deep neural networks-a survey

E Haedecke, MA Pintz - … on Trustworthy Artificial Intelligence as a part of …, 2022 - hal.science
In light of deep neural network applications emerging in diverse fields (e.g., industry,
healthcare, or finance), weaknesses and failures of these models might bear unacceptable …

A Twin XCBR System Using Supportive and Contrastive Explanations

B Bayrak, K Bach - ICCBR 2023 Workshop Proceedings, 2023 - ntnuopen.ntnu.no
Machine learning models are increasingly being applied in safety-critical domains.
Therefore, ensuring their trustworthiness and reliability has become a priority. Uncertainty …

Identifying Drivers of Predictive Aleatoric Uncertainty

P Iversen, S Witzke, K Baum, BY Renard - openreview.net
Explainability and uncertainty quantification are two pillars of trustable artificial intelligence.
However, the reasoning behind uncertainty estimates is generally left unexplained …