Interpretable machine learning for weather and climate prediction: A review

R Yang, J Hu, Z Li, J Mu, T Yu, J Xia, X Li… - Atmospheric …, 2024 - Elsevier
Advanced machine learning models have recently achieved high predictive accuracy for
weather and climate prediction. However, these complex models often lack inherent …

Investigating spatial effects through machine learning and leveraging explainable AI for child malnutrition in Pakistan

X Zhang, M Usman, AR Irshad, M Rashid… - … International Journal of …, 2024 - mdpi.com
While socioeconomic gradients in regional health inequalities are firmly established, the
synergistic interactions between socioeconomic deprivation and climate vulnerability within …

Decoupling pixel flipping and occlusion strategy for consistent XAI benchmarks

S Blücher, J Vielhaben, N Strodthoff - arXiv preprint arXiv:2401.06654, 2024 - arxiv.org
Feature removal is a central building block for eXplainable AI (XAI), both for occlusion-based
explanations (Shapley values) as well as their evaluation (pixel flipping, PF). However …

shapiq: Shapley interactions for machine learning

M Muschalik, H Baniecki, F Fumagalli… - arXiv preprint arXiv …, 2024 - arxiv.org
Originally rooted in game theory, the Shapley Value (SV) has recently become an important
tool in machine learning research. Perhaps most notably, it is used for feature attribution and …

Comparing expert systems and their explainability through similarity

F Gwinner, C Tomitza, A Winkelmann - Decision Support Systems, 2024 - Elsevier
In our work, we propose the use of Representational Similarity Analysis (RSA) for
explainable AI (XAI) approaches to enhance the reliability of XAI-based decision support …

Benchmarking XAI explanations with human-aligned evaluations

R Kazmierczak, S Azzolin, E Berthier… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we introduce PASTA (Perceptual Assessment System for explanaTion of
Artificial intelligence), a novel framework for a human-centric evaluation of XAI techniques in …

SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation

D Fucci, M Gaido, B Savoldi, M Negri, M Cettolo… - arXiv preprint arXiv …, 2024 - arxiv.org
Spurred by the demand for interpretable models, research on eXplainable AI for language
technologies has experienced significant growth, with feature attribution methods emerging …

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers

L Badisa, SS Channappayya - arXiv preprint arXiv:2406.11534, 2024 - arxiv.org
The perturbation test remains the go-to evaluation approach for explanation methods in
computer vision. This evaluation method has a major drawback of test-time distribution shift …

Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

H Xiong, X Zhang, J Chen, X Sun, Y Li, Z Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
Given the complexity and lack of transparency in deep neural networks (DNNs), extensive
efforts have been made to make these systems more interpretable or explain their behaviors …

A Tale of Two Imperatives: Privacy and Explainability

S Manna, N Sett - arXiv preprint arXiv:2412.20798, 2024 - arxiv.org
Deep learning's preponderance across scientific domains has reshaped high-stakes
decision-making, making it essential to follow rigorous operational frameworks that include …