Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of artificial intelligence (AI) systems in high-risk settings underscores the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …
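
As an aside on what such redundancy looks like (a toy Python sketch, not code or data from the paper): the path followed by an instance in a decision tree can test features whose values do not actually matter for the prediction, in which case a strict subset of the path already entails the prediction and the path explanation is redundant.

```python
from itertools import product

def tree_predict(x):
    """Toy decision tree: tests x1, then x2, but both x2-branches agree."""
    if x["x1"] == 1:
        if x["x2"] == 1:
            return 1
        return 1          # same class on both branches: x2 is redundant here
    return 0

def entails(fixed, prediction):
    """True iff every completion of the fixed features yields the prediction."""
    free = [f for f in ("x1", "x2") if f not in fixed]
    for values in product((0, 1), repeat=len(free)):
        point = dict(fixed, **dict(zip(free, values)))
        if tree_predict(point) != prediction:
            return False
    return True

instance = {"x1": 1, "x2": 0}
pred = tree_predict(instance)

# The path followed by the instance mentions both features ...
print(entails({"x1": 1, "x2": 0}, pred))   # True
# ... but fixing x1 alone already forces the same prediction, so the path
# explanation is redundant; {x1} is a subset-minimal (abductive) explanation.
print(entails({"x1": 1}, pred))            # True
print(entails({"x2": 0}, pred))            # False
```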

On computing probabilistic explanations for decision trees

M Arenas, P Barceló, M Romero Orth… - Advances in …, 2022 - proceedings.neurips.cc
Formal XAI (explainable AI) is a growing area that focuses on computing explanations with
mathematical guarantees for the decisions made by ML models. Inside formal XAI, one of …

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

Explanations for Monotonic Classifiers

J Marques-Silva, T Gerspacher… - International …, 2021 - proceedings.mlr.press
In many classification tasks there is a requirement of monotonicity. Concretely, if all else
remains constant, increasing (resp. decreasing) the value of one or more features must not …
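
The monotonicity requirement quoted above can be stated operationally. The sketch below is a brute-force check over a small discrete feature space with a hypothetical toy classifier; it only illustrates the property, not the paper's explanation algorithms.

```python
from itertools import product

def classify(x):
    """Toy score-based classifier over three features valued in {0, 1, 2}."""
    return 1 if 2 * x[0] + x[1] + x[2] >= 3 else 0

def is_monotone(predict, n_features, domain=(0, 1, 2)):
    """Check: increasing one feature, all else fixed, never decreases the prediction."""
    for point in product(domain, repeat=n_features):
        for i, v in enumerate(point):
            for w in domain:
                if w <= v:
                    continue
                bumped = point[:i] + (w,) + point[i + 1:]
                if predict(bumped) < predict(point):
                    return False
    return True

print(is_monotone(classify, 3))  # True: the linear scoring rule has non-negative weights
```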

Using MaxSAT for efficient explanations of tree ensembles

A Ignatiev, Y Izza, PJ Stuckey… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Tree ensembles (TEs) are a prevalent machine learning model that does not offer
guarantees of interpretability and that represents a challenge from the perspective of explainable …

Solving explainability queries with quantification: The case of feature relevancy

X Huang, Y Izza, J Marques-Silva - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Trustable explanations of machine learning (ML) models are vital in high-risk uses of
artificial intelligence (AI). Apart from the computation of trustable explanations, a number of …
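
For background, in the formal-XAI literature a feature is typically deemed relevant if it occurs in at least one abductive explanation, i.e. a subset-minimal set of features which, fixed to the instance's values, entails the prediction. The sketch below answers the relevancy query by brute-force enumeration for a toy classifier; it illustrates the query itself, not the paper's quantification-based method, and all names are hypothetical.

```python
from itertools import product, combinations

FEATURES = ("x1", "x2", "x3")

def model(x):
    """Toy classifier: predicts 1 whenever x1 or x2 is set."""
    return 1 if x["x1"] == 1 or x["x2"] == 1 else 0

def entails(fixed, prediction):
    """True iff every completion of the fixed features yields the prediction."""
    free = [f for f in FEATURES if f not in fixed]
    return all(
        model(dict(fixed, **dict(zip(free, vals)))) == prediction
        for vals in product((0, 1), repeat=len(free))
    )

def abductive_explanations(instance):
    """All subset-minimal feature sets that, fixed to the instance's values,
    entail the instance's prediction (brute force over all subsets)."""
    pred = model(instance)
    axps = []
    for k in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, k):
            fixed = {f: instance[f] for f in subset}
            if entails(fixed, pred) and not any(set(a) <= set(subset) for a in axps):
                axps.append(subset)
    return axps

instance = {"x1": 1, "x2": 1, "x3": 0}
axps = abductive_explanations(instance)
print(axps)                                    # [('x1',), ('x2',)]
relevant = {f for a in axps for f in a}
print("x3 relevant?", "x3" in relevant)        # False: x3 occurs in no explanation
```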

The inadequacy of Shapley values for explainability

X Huang, J Marques-Silva - arXiv preprint arXiv:2302.08160, 2023 - arxiv.org
This paper develops a rigorous argument for why the use of Shapley values in explainable
AI (XAI) will necessarily yield provably misleading information about the relative importance …
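
For context, the quantity at issue is the game-theoretic Shapley value adapted to feature attribution: phi_i = sum over S ⊆ N\{i} of |S|!(n-|S|-1)!/n! * (v(S ∪ {i}) - v(S)), where v(S) is commonly taken as the expected model output with the features in S fixed to the instance's values. The sketch below computes this exactly for a toy Boolean classifier under an assumed uniform feature distribution; it only illustrates the definition, not the paper's argument about why the resulting scores can mislead.

```python
from itertools import product, combinations
from math import factorial

def model(x):
    """Toy Boolean classifier over features (x1, x2, x3)."""
    return 1 if (x[0] and x[1]) or x[2] else 0

def value(subset, instance, n=3):
    """v(S): expected model output with features in S fixed to the instance's
    values and the remaining features drawn uniformly from {0, 1}."""
    free = [i for i in range(n) if i not in subset]
    total = 0.0
    for bits in product((0, 1), repeat=len(free)):
        point = list(instance)
        for i, b in zip(free, bits):
            point[i] = b
        total += model(point)
    return total / (2 ** len(free))

def shapley(instance, n=3):
    """Exact Shapley values: weighted marginal contributions over all subsets."""
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(S) | {i}, instance) - value(set(S), instance))
        phis.append(phi)
    return phis

print(shapley((1, 1, 0)))  # attribution scores for the instance (1, 1, 0)
```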

Tractable explanations for d-DNNF classifiers

X Huang, Y Izza, A Ignatiev, M Cooper… - Proceedings of the …, 2022 - ojs.aaai.org
Compilation into propositional languages finds a growing number of practical uses,
including in constraint programming, diagnosis and machine learning (ML), among others …

On the explanatory power of Boolean decision trees

G Audemard, S Bellart, L Bounia, F Koriche… - Data & Knowledge …, 2022 - Elsevier
Decision trees have long been recognized as models of choice in sensitive applications
where interpretability is of paramount importance. In this paper, we examine the …