Evaluating post-hoc explanations for graph neural networks via robustness analysis

J Fang, W Liu, Y Gao, Z Liu, A Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
This work studies the evaluation of explanations for graph neural networks (GNNs), which is
crucial to the credibility of post-hoc explainability in practical use. Conventional evaluation …

CRAFT: Concept recursive activation factorization for explainability

T Fel, A Picard, L Bethune, T Boissin… - Proceedings of the …, 2023 - openaccess.thecvf.com
Attribution methods are a popular class of explainability methods that use heatmaps to
depict the most important areas of an image that drive a model decision. Nevertheless …

Connecting algorithmic research and usage contexts: a perspective of contextualized evaluation for explainable AI

QV Liao, Y Zhang, R Luss, F Doshi-Velez… - Proceedings of the …, 2022 - ojs.aaai.org
Recent years have seen a surge of interest in the field of explainable AI (XAI), with a
plethora of algorithms proposed in the literature. However, a lack of consensus on how to …

What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods

J Colin, T Fel, R Cadène… - Advances in neural …, 2022 - proceedings.neurips.cc
A multitude of explainability methods has been described to try to help users better
understand how modern AI systems make decisions. However, most performance metrics …

Faithfulness tests for natural language explanations

P Atanasova, OM Camburu, C Lioma… - arXiv preprint arXiv …, 2023 - arxiv.org
Explanations of neural models aim to reveal a model's decision-making process for its
predictions. However, recent work shows that current methods giving explanations such as …

The out-of-distribution problem in explainability and search methods for feature importance explanations

P Hase, H Xie, M Bansal - Advances in neural information …, 2021 - proceedings.neurips.cc
Feature importance (FI) estimates are a popular form of explanation, and they are commonly
created and evaluated by computing the change in model confidence caused by removing …
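The removal-based evaluation this entry describes can be illustrated with a minimal sketch, assuming a generic deletion-curve setup; the function name deletion_curve, the zero baseline, and the toy linear model are illustrative stand-ins, not the cited paper's protocol.

    # Hedged sketch: deletion-style evaluation of feature-importance scores.
    # Remove the most-important features first and record how the model's
    # output changes; a steep drop suggests a more faithful explanation.
    import numpy as np

    def deletion_curve(model, x, importance, baseline=0.0, steps=10):
        order = np.argsort(-importance.ravel())        # most important first
        x_flat = x.ravel().copy()
        scores = [model(x_flat.reshape(x.shape))]
        chunk = max(1, len(order) // steps)
        for start in range(0, len(order), chunk):
            x_flat[order[start:start + chunk]] = baseline   # "remove" features
            scores.append(model(x_flat.reshape(x.shape)))
        return np.array(scores)

    # Toy usage: a linear stand-in "model" whose score is a weighted sum.
    rng = np.random.default_rng(0)
    w = rng.normal(size=20)
    model = lambda x: float(w @ x.ravel())
    x = rng.normal(size=20)
    curve = deletion_curve(model, x, importance=np.abs(w * x))

Note that replacing features with a fixed baseline is exactly the step that can push inputs out of distribution, which is the problem the cited paper examines.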

A holistic approach to unifying automatic concept extraction and concept importance estimation

T Fel, V Boutin, L Béthune, R Cadène… - Advances in …, 2024 - proceedings.neurips.cc
In recent years, concept-based approaches have emerged as some of the most promising
explainability methods to help us interpret the decisions of Artificial Neural Networks (ANNs) …

Don't lie to me! Robust and efficient explainability with verified perturbation analysis

T Fel, M Ducoffe, D Vigouroux… - Proceedings of the …, 2023 - openaccess.thecvf.com
A variety of methods have been proposed to try to explain how deep neural networks make
their decisions. Key to those approaches is the need to sample the pixel space efficiently in …
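The pixel-space sampling mentioned in this snippet can be pictured with a basic occlusion attribution, sketched below; this is only an illustrative perturbation baseline, not the paper's verified analysis, and occlusion_map, the patch size, and the stand-in scorer are assumptions made for the example.

    # Hedged sketch: occlusion attribution that samples the pixel space by
    # masking one patch at a time and measuring the drop in the model's score.
    import numpy as np

    def occlusion_map(model, image, patch=4, fill=0.0):
        h, w = image.shape
        base = model(image)
        heatmap = np.zeros(image.shape)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = fill
                heatmap[i:i + patch, j:j + patch] = base - model(occluded)
        return heatmap   # larger values = region mattered more to the prediction

    # Toy usage with a stand-in scorer that sums a fixed weight mask.
    rng = np.random.default_rng(0)
    weights = rng.random((16, 16))
    model = lambda img: float((weights * img).sum())
    explanation = occlusion_map(model, rng.random((16, 16)))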

Encoding time-series explanations through self-supervised model behavior consistency

O Queen, T Hartvigsen, T Koker, H He… - Advances in …, 2024 - proceedings.neurips.cc
Interpreting time series models is uniquely challenging because it requires identifying both
the location of time series signals that drive model predictions and their matching to an …

Xplique: A deep learning explainability toolbox

T Fel, L Hervier, D Vigouroux, A Poche… - arXiv preprint arXiv …, 2022 - arxiv.org
Today's most advanced machine-learning models are hardly scrutable. The key challenge
for explainability methods is to assist researchers in opening up these black boxes …