While socioeconomic gradients in regional health inequalities are firmly established, the synergistic interactions between socioeconomic deprivation and climate vulnerability within …
Feature removal is a central building block for eXplainable AI (XAI), both for occlusion-based explanations (Shapley values) and for their evaluation (pixel flipping, PF). However …
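A minimal sketch of the pixel-flipping (PF) evaluation this entry names, assuming a zero baseline for removed features and a toy linear scorer; both are illustrative stand-ins, not the paper's actual setup:

```python
# Pixel flipping (PF) sketch: remove features in order of attributed
# relevance and track how the model's score decays.
import numpy as np

def pixel_flipping(model, x, attribution, baseline=0.0, steps=20):
    """Flip pixels from most to least relevant; return score at each step."""
    order = np.argsort(attribution.ravel())[::-1]      # most relevant first
    x_flipped = x.copy().ravel()
    scores = [model(x_flipped.reshape(x.shape))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_flipped[order[i:i + chunk]] = baseline       # remove a feature chunk
        scores.append(model(x_flipped.reshape(x.shape)))
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 8))
    w = rng.normal(size=(8, 8))
    model = lambda img: float((w * img).sum())         # toy linear scorer
    attribution = w * x                                # exact for linear models
    curve = pixel_flipping(model, x, attribution)
    # A faithful attribution concentrates the drop in the early steps, so
    # the area under this curve (lower = better) is a common PF summary.
    print(curve[:5])
```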
Originally rooted in game theory, the Shapley Value (SV) has recently become an important tool in machine learning research. Perhaps most notably, it is used for feature attribution and …
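Since this entry names the Shapley Value for feature attribution, here is a hedged sketch of the standard permutation-sampling Monte Carlo estimator; the toy value function and feature count are assumptions for illustration only:

```python
# Monte Carlo Shapley estimation: average each feature's marginal
# contribution over random player orderings. `value_fn` maps a feature
# subset to a payoff (e.g., model score with the other features removed).
import numpy as np

def shapley_mc(value_fn, n_features, n_permutations=2000, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_permutations):
        perm = rng.permutation(n_features)
        coalition = set()
        prev = value_fn(coalition)
        for j in perm:
            coalition.add(j)
            cur = value_fn(coalition)
            phi[j] += cur - prev                # marginal contribution of j
            prev = cur
    return phi / n_permutations

if __name__ == "__main__":
    # Toy game: v(S) = sum of weights in S, plus a bonus when features
    # 0 and 1 appear together; symmetry splits the bonus between them.
    w = np.array([1.0, 2.0, 0.5])
    value_fn = lambda S: w[list(S)].sum() + (0.7 if {0, 1} <= S else 0.0)
    print(shapley_mc(value_fn, 3))              # ≈ [1.35, 2.35, 0.5]
```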
In our work, we propose using Representational Similarity Analysis (RSA) within explainable AI (XAI) approaches to enhance the reliability of XAI-based decision support …
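A short sketch of textbook RSA under common assumptions (correlation-distance RDMs, Spearman second-order correlation); how the cited work couples RSA with XAI pipelines is not visible in the snippet and is not reproduced here:

```python
# Representational Similarity Analysis (RSA): build a representational
# dissimilarity matrix (RDM) per representation over the same stimuli,
# then compare RDMs with a rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed RDM: pairwise correlation distance between stimuli."""
    return pdist(activations, metric="correlation")

def rsa_score(acts_a, acts_b):
    """Second-order similarity between two representations."""
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=(30, 16))           # 30 stimuli, 16 features
    layer_a = stimuli @ rng.normal(size=(16, 8))  # two "layers" viewing the
    layer_b = stimuli @ rng.normal(size=(16, 8))  # same stimuli differently
    print(rsa_score(layer_a, layer_b))
```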
In this paper, we introduce PASTA (Perceptual Assessment System for explanaTion of Artificial intelligence), a novel framework for a human-centric evaluation of XAI techniques in …
Spurred by the demand for interpretable models, research on eXplainable AI for language technologies has experienced significant growth, with feature attribution methods emerging …
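As one concrete instance of the feature attribution family this entry describes, below is a minimal leave-one-out (occlusion) sketch for text; the keyword-count "classifier" is a hypothetical placeholder for a real model:

```python
# Leave-one-out token attribution: a token's relevance is the score drop
# observed when that token is deleted from the input.
POSITIVE = {"great", "excellent", "love"}

def score(tokens):
    """Toy sentiment score: fraction of positive keywords."""
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def leave_one_out(tokens):
    """Attribution of token i = score(full input) - score(input without i)."""
    base = score(tokens)
    return {t: base - score(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

if __name__ == "__main__":
    tokens = "i love this great phone".split()
    for tok, attr in leave_one_out(tokens).items():
        # Neutral tokens can get small negative scores here, a known
        # artifact of normalizing by input length.
        print(f"{tok:>6}: {attr:+.3f}")
```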
L Badisa, SS Channappayya - arXiv preprint arXiv:2406.11534, 2024 - arxiv.org
The perturbation test remains the go-to evaluation approach for explanation methods in computer vision. This evaluation method, however, suffers from a major drawback: test-time distribution shift …
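To make the distribution-shift drawback concrete: zero-imputing deleted pixels pushes perturbed inputs off the data manifold, while infilling from neighboring pixels (a ROAD-style surrogate) stays closer to it. The Gaussian toy data and the mitigation below are illustrative assumptions, not this paper's method:

```python
# Compare two ways of "removing" pixels during a perturbation test:
# a constant zero fill versus a local-neighbor mean infill.
import numpy as np

def perturb(img, mask, mode="zeros"):
    out = img.copy()
    if mode == "zeros":
        out[mask] = 0.0                       # off-manifold constant fill
    else:                                     # "neighbor": smooth infill
        blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
        out[mask] = blurred[mask]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(loc=1.0, scale=0.1, size=(16, 16))  # smooth toy "image"
    mask = rng.random(img.shape) < 0.3                   # delete 30% of pixels
    for mode in ("zeros", "neighbor"):
        shift = abs(perturb(img, mask, mode).mean() - img.mean())
        print(f"{mode:>8}: mean shift = {shift:.3f}")    # zeros shifts far more
```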
H Xiong, X Zhang, J Chen, X Sun, Y Li, Z Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
Given the complexity and lack of transparency in deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or to explain their behaviors …
S Manna, N Sett - arXiv preprint arXiv:2412.20798, 2024 - arxiv.org
Deep learning's growing prevalence across scientific domains has reshaped high-stakes decision-making, making it essential to follow rigorous operational frameworks that include …