A Darwiche, A Hirth - ECAI 2020, 2020 - ebooks.iospress.nl
Recent work has shown that some common machine learning classifiers can be compiled into Boolean circuits that have the same input-output behavior. We present a theory for …
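To make the compilation claim concrete, here is a minimal sketch (not the paper's method) of what "same input-output behavior" means: a hypothetical toy decision tree over Boolean features is rewritten as a Boolean formula (one term per accepting root-to-leaf path), and equivalence is verified by exhaustive enumeration. The names `tree_predict` and `circuit` are illustrative assumptions.

```python
from itertools import product

# Hypothetical toy classifier: a depth-2 decision tree over Boolean
# features x0, x1, x2, returning True (class 1) or False (class 0).
def tree_predict(x):
    if x[0]:
        return x[1]          # left branch tests x1
    return x[2]              # right branch tests x2

# "Compiled" Boolean circuit: the disjunction of the root-to-leaf
# paths that end in class 1, written here as a formula.
def circuit(x):
    return (x[0] and x[1]) or (not x[0] and x[2])

# Check identical input-output behavior by enumerating all 2^3
# Boolean inputs (feasible only for tiny feature spaces).
assert all(tree_predict(x) == circuit(x)
           for x in product([False, True], repeat=3))
print("tree and compiled circuit agree on all inputs")
```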
In recent years, there has been an increasing interest in exploiting logically specified background knowledge in order to obtain neural models (i) with a better performance, (ii) …
Decision trees (DTs) epitomize the ideal of interpretability in machine learning (ML) models. The interpretability of decision trees motivates explainability approaches by so-called …
P Barceló, M Monet, J Pérez… - Advances in neural …, 2020 - proceedings.neurips.cc
In spite of several claims stating that some models are more interpretable than others (e.g., "linear models are more interpretable than deep neural networks"), we still lack a principled …
J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning (ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …
G Audemard, F Koriche… - … Conference on Principles …, 2020 - univ-artois.hal.science
One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding predictions made by Machine Learning (ML) models and for assessing how reliable …
In many classification tasks there is a requirement of monotonicity. Concretely, if all else remains constant, increasing (resp. decreasing) the value of one or more features must not …
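A brute-force sketch of the monotonicity requirement, under assumed names (`predict`, `dominates`): for a toy threshold classifier over a small discrete feature space, no componentwise increase of an input may decrease the predicted class.

```python
from itertools import product

# Hypothetical classifier over three features, each in {0, 1, 2};
# the prediction is 1 iff the feature sum reaches a threshold.
def predict(x):
    return int(sum(x) >= 3)

def dominates(a, b):
    """a >= b componentwise (all else equal or increased)."""
    return all(ai >= bi for ai, bi in zip(a, b))

# Brute-force monotonicity check: increasing feature values must
# never decrease the predicted class.
points = list(product(range(3), repeat=3))
violations = [(a, b) for a in points for b in points
              if dominates(a, b) and predict(a) < predict(b)]
print("monotone" if not violations else f"violations: {violations[:3]}")
```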
Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are …
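The following is a generic sketch of the PI-explanation idea (a subset-minimal set of feature-value pairs that forces the prediction), not the paper's NBC-specific polynomial-time algorithm: a deletion-based search drops one feature at a time and keeps the drop whenever the remaining pairs still fix the prediction, with sufficiency checked by exhaustive enumeration. The classifier `predict` and the instance are hypothetical.

```python
from itertools import product

# Hypothetical Boolean classifier: predicts 1 iff x0 AND (x1 OR x2).
def predict(x):
    return int(x[0] and (x[1] or x[2]))

def is_sufficient(fixed, n):
    """True if fixing the feature-value pairs in `fixed` forces the
    same prediction under every completion of the free features."""
    free = [i for i in range(n) if i not in fixed]
    preds = set()
    for vals in product([0, 1], repeat=len(free)):
        x = {**fixed, **dict(zip(free, vals))}
        preds.add(predict([x[i] for i in range(n)]))
    return len(preds) == 1

def pi_explanation(instance):
    """Deletion-based search for a subset-minimal sufficient set of
    feature-value pairs: try dropping each feature once, keeping the
    drop whenever sufficiency is preserved."""
    n = len(instance)
    fixed = dict(enumerate(instance))
    for i in range(n):
        trial = {j: v for j, v in fixed.items() if j != i}
        if is_sufficient(trial, n):
            fixed = trial
    return fixed

print(pi_explanation([1, 1, 0]))   # -> {0: 1, 1: 1}
```

On this toy instance the pair {x0=1, x1=1} already forces class 1 regardless of x2, so x2 is dropped from the explanation; exhaustive checking makes the sketch exponential in general, which is precisely why classifier-specific algorithms such as the one the abstract refers to matter.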
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical …