Counterfactual explanations and how to find them: literature review and benchmarking

R Guidotti - Data Mining and Knowledge Discovery, 2024 - Springer
Interpretable machine learning aims at unveiling the reasons behind predictions returned by
uninterpretable classifiers. One of the most valuable types of explanation consists of …

A survey of algorithmic recourse: contrastive explanations and consequential recommendations

AH Karimi, G Barthe, B Schölkopf, I Valera - ACM Computing Surveys, 2022 - dl.acm.org
Machine learning is increasingly used to inform decision making in sensitive situations
where decisions have consequential effects on individuals' lives. In these settings, in …

A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence

I Stepin, JM Alonso, A Catala, M Pereira-Fariña - IEEE Access, 2021 - ieeexplore.ieee.org
A number of algorithms in the field of artificial intelligence offer poorly interpretable
decisions. To disclose the reasoning behind such algorithms, their output can be explained …

Evaluating XAI: A comparison of rule-based and example-based explanations

J van der Waa, E Nieuwburg, A Cremers, M Neerincx - Artificial intelligence, 2021 - Elsevier
Current developments in Artificial Intelligence (AI) have led to a resurgence of
Explainable AI (XAI). New methods are being researched to obtain information from AI …

FACE: feasible and actionable counterfactual explanations

R Poyiadzi, K Sokol, R Santos-Rodriguez… - Proceedings of the …, 2020 - dl.acm.org
Work in Counterfactual Explanations tends to focus on the principle of "the closest possible
world" that identifies small changes leading to the desired outcome. In this paper we argue …

Human-centered XAI: Developing design patterns for explanations of clinical decision support systems

TAJ Schoonderwoerd, W Jorritsma, MA Neerincx… - International Journal of …, 2021 - Elsevier
Much of the research on eXplainable Artificial Intelligence (XAI) has centered on providing
transparency of machine learning models. More recently, the focus on human-centered …

Towards a theory of longitudinal trust calibration in human–robot teams

EJ De Visser, MMM Peeters, MF Jung, S Kohn… - International journal of …, 2020 - Springer
The introduction of artificial teammates in the form of autonomous social robots, with fewer
social abilities compared to humans, presents new challenges for human–robot team …

On completeness-aware concept-based explanations in deep neural networks

CK Yeh, B Kim, S Arik, CL Li… - Advances in neural …, 2020 - proceedings.neurips.cc
Human explanations of high-level decisions are often expressed in terms of key concepts
the decisions are based on. In this paper, we study such concept-based explainability for …

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects

AH Karimi, G Barthe, B Schölkopf, I Valera - arXiv preprint arXiv …, 2020 - arxiv.org
Machine learning is increasingly used to inform decision-making in sensitive situations
where decisions have consequential effects on individuals' lives. In these settings, in …

Do people engage cognitively with AI? Impact of AI assistance on incidental learning

KZ Gajos, L Mamykina - … of the 27th International Conference on …, 2022 - dl.acm.org
When people receive advice while making difficult decisions, they often make better
decisions in the moment and also increase their knowledge in the process. However, such …