Delivering inflated explanations

Y Izza, A Ignatiev, PJ Stuckey… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
In the quest for Explainable Artificial Intelligence (XAI), one of the questions that frequently
arises given a decision made by an AI system is, "why was the decision made in this …

Logic-based explainability: past, present and future

J Marques-Silva - International Symposium on Leveraging Applications of …, 2024 - Springer
In recent years, the impact of machine learning (ML) and artificial intelligence (AI) in society
has been absolutely remarkable. This impact is expected to continue in the foreseeable …

Anytime approximate formal feature attribution

J Yu, G Farr, A Ignatiev, PJ Stuckey - arXiv preprint arXiv:2312.06973, 2023 - arxiv.org
Widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models
on the one hand and a number of crucial issues pertaining to them warrant the need for …

Computing inflated explanations for boosted trees: a compilation-based approach

A Murtovi, M Schlüter, B Steffen - … Essays Dedicated to Tiziana Margaria on …, 2024 - Springer
Explaining a classification made by tree-ensembles is an inherently hard problem that is
traditionally solved approximately, without guaranteeing sufficiency or necessity. Abductive …

Efficient Contrastive Explanations on Demand

Y Izza, J Marques-Silva - arXiv preprint arXiv:2412.18262, 2024 - arxiv.org
Recent work revealed a tight connection between adversarial robustness and restricted
forms of symbolic explanations, namely distance-based (formal) explanations. This …

From SHAP Scores to Feature Importance Scores

O Letoffe, X Huang, N Asher… - arXiv preprint arXiv …, 2024 - arxiv.org
A central goal of eXplainable Artificial Intelligence (XAI) is to assign relative importance to
the features of a Machine Learning (ML) model given some prediction. The importance of …

The Explanation Game--Rekindled (Extended Version)

J Marques-Silva, X Huang, O Letoffe - arXiv preprint arXiv:2501.11429, 2025 - arxiv.org
Recent work demonstrated the existence of critical flaws in the current use of Shapley values
in explainable AI (XAI), i.e., the so-called SHAP scores. These flaws are significant in that the …

The Sets of Power

J Marques-Silva, C Mencía, R Mencía - arXiv preprint arXiv:2410.07867, 2024 - arxiv.org
Measures of voting power have been the subject of extensive research since the mid-1940s.
More recently, similar measures of relative importance have been studied in other domains …

CAGE: Causality-Aware Shapley Value for Global Explanations

NO Breuer, A Sauter, M Mohammadi, E Acar - World Conference on …, 2024 - Springer
As Artificial Intelligence (AI) is having more influence on our everyday lives, it
becomes important that AI-based decisions are transparent and explainable. As a …