Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of artificial intelligence (AI) systems in high-risk settings underscores the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

Explanations for Monotonic Classifiers.

J Marques-Silva, T Gerspacher… - International …, 2021 - proceedings.mlr.press
In many classification tasks there is a requirement of monotonicity. Concretely, if all else
remains constant, increasing (resp. decreasing) the value of one or more features must not …
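
The monotonicity requirement described in this snippet can be sketched with a toy example (an illustration only, not from the cited paper; the classifier and grid below are hypothetical):

```python
# Brute-force check of monotonicity: with all else constant, increasing a
# feature value must not decrease the prediction. A classifier is monotone
# if x <= y component-wise implies classify(x) <= classify(y).

def classify(x):
    # Hypothetical monotone classifier: positive iff a weighted sum with
    # non-negative weights reaches a threshold.
    weights = [2, 1, 3]
    return 1 if sum(w * v for w, v in zip(weights, x)) >= 5 else 0

def is_monotone_on(points):
    """Check every comparable pair of points for a monotonicity violation."""
    for x in points:
        for y in points:
            if all(a <= b for a, b in zip(x, y)) and classify(x) > classify(y):
                return False
    return True

# Exhaustive check over a small discrete feature grid.
grid = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)]
print(is_monotone_on(grid))  # -> True for this classifier
```

Since the weights are non-negative, no increase in any feature can lower the weighted sum, so the check succeeds on the whole grid.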

Using MaxSAT for efficient explanations of tree ensembles

A Ignatiev, Y Izza, PJ Stuckey… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Tree ensembles (TEs) denote a prevalent machine learning model that does not offer
guarantees of interpretability and that represents a challenge from the perspective of explainable …

Solving explainability queries with quantification: The case of feature relevancy

X Huang, Y Izza, J Marques-Silva - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Trustable explanations of machine learning (ML) models are vital in high-risk uses of
artificial intelligence (AI). Apart from the computation of trustable explanations, a number of …

The inadequacy of Shapley values for explainability

X Huang, J Marques-Silva - arXiv preprint arXiv:2302.08160, 2023 - arxiv.org
This paper develops a rigorous argument for why the use of Shapley values in explainable
AI (XAI) will necessarily yield provably misleading information about the relative importance …

Tractable explanations for d-DNNF classifiers

X Huang, Y Izza, A Ignatiev, M Cooper… - Proceedings of the …, 2022 - ojs.aaai.org
Compilation into propositional languages finds a growing number of practical uses,
including in constraint programming, diagnosis and machine learning (ML), among others …

On efficiently explaining graph-based classifiers

X Huang, Y Izza, A Ignatiev, J Marques-Silva - arXiv preprint arXiv …, 2021 - arxiv.org
Recent work has not only shown that decision trees (DTs) may not be interpretable, but has
also proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper …
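
The PI-explanations mentioned in this snippet can be illustrated with a brute-force sketch (an illustration of the concept only, not the cited polynomial-time algorithm; the classifier below is hypothetical). A PI-explanation is a subset-minimal set of feature assignments that by itself guarantees the prediction, regardless of the remaining features:

```python
from itertools import combinations, product

def classify(x):
    # Hypothetical boolean classifier over three features.
    return int(x[0] and (x[1] or x[2]))

def is_sufficient(instance, subset, domain=(0, 1)):
    """Fixing the features in `subset` to their instance values must force
    the same prediction for every completion of the free features."""
    target = classify(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product(domain, repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if classify(x) != target:
            return False
    return True

def pi_explanation(instance):
    """Return a smallest sufficient set of feature indices; a smallest such
    set is necessarily subset-minimal, i.e. a PI-explanation."""
    n = len(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if is_sufficient(instance, set(subset)):
                return set(subset)

print(pi_explanation((1, 1, 0)))  # -> {0, 1}
```

For the instance (1, 1, 0), fixing features 0 and 1 to value 1 forces the prediction 1 whatever feature 2 takes, and no single feature suffices, so {0, 1} is a PI-explanation. The brute force here is exponential; the cited work's point is that for DTs one PI-explanation is computable in polynomial time.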

Verix: Towards verified explainability of deep neural networks

M Wu, H Wu, C Barrett - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We present VeriX (Verified eXplainability), a system for producing optimal robust
explanations and generating counterfactuals along decision boundaries of machine …