How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?

Z Carmichael, WJ Scheirer - arXiv preprint arXiv:2310.18496, 2023 - arxiv.org
Surging interest in deep learning from high-stakes domains has precipitated concern over
the inscrutable nature of black box neural networks. Explainable AI (XAI) research has led to …

DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation

Y Wu, M Keoliya, K Chen, N Velingker, Z Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Designing faithful yet accurate AI models is challenging, particularly in the field of individual
treatment effect estimation (ITE). ITE prediction models deployed in critical settings such as …

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers

L Badisa, SS Channappayya - arXiv preprint arXiv:2406.11534, 2024 - arxiv.org
The perturbation test remains the go-to evaluation approach for explanation methods in
computer vision. This evaluation method has a major drawback of test-time distribution shift …

An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records

J Edin, M Maistro, L Maaløe, L Borgholt… - arXiv preprint arXiv …, 2024 - arxiv.org
Electronic healthcare records are vital for patient safety as they document conditions, plans,
and procedures in both free text and medical codes. Language models have significantly …

Explainable AI for High-stakes Decision-making

Z Carmichael - 2024 - curate.nd.edu
As a result of the many recent advancements in artificial intelligence (AI), a significant
interest in the technology has developed from high-stakes decision-makers in industries …

DISCRET: a self-interpretable framework for treatment effect estimation

Y Wu, N Velingker, Z Li, K Chen, M Keoliya, M Naik… - openreview.net
Individual treatment effect is of great importance for healthcare and beyond. While most
existing solutions focus on accurate treatment effect estimations, they rely on non …