From local explanations to global understanding with explainable AI for trees

SM Lundberg, G Erion, H Chen, A DeGrave… - Nature machine …, 2020 - nature.com
Tree-based machine learning models such as random forests, decision trees and gradient
boosted trees are popular nonlinear predictive models, yet comparatively little attention has …

FOCUS: Flexible optimizable counterfactual explanations for tree ensembles

A Lucic, H Oosterhuis, H Haned… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Model interpretability has become an important problem in machine learning (ML)
due to the increased effect algorithmic decisions have on humans. Counterfactual …

Interpreting blackbox models via model extraction

O Bastani, C Kim, H Bastani - arXiv preprint arXiv:1705.08504, 2017 - arxiv.org
Interpretability has become incredibly important as machine learning is increasingly used to
inform consequential decisions. We propose to construct global explanations of complex …

Interpretability via model extraction

O Bastani, C Kim, H Bastani - arXiv preprint arXiv:1706.09773, 2017 - arxiv.org
The ability to interpret machine learning models has become increasingly important now that
machine learning is used to inform consequential decisions. We propose an approach …

Building more accurate decision trees with the additive tree

JM Luna, ED Gennatas, LH Ungar… - Proceedings of the …, 2019 - National Acad Sciences
The expansion of machine learning to high-stakes application domains such as medicine,
finance, and criminal justice, where making informed decisions requires clear understanding …

Optimal counterfactual explanations in tree ensembles

A Parmentier, T Vidal - International conference on machine …, 2021 - proceedings.mlr.press
Counterfactual explanations are usually generated through heuristics that are sensitive to
the search's initial conditions. The absence of guarantees of performance and robustness …

What's inside the black-box? A genetic programming method for interpreting complex machine learning models

BP Evans, B Xue, M Zhang - Proceedings of the genetic and evolutionary …, 2019 - dl.acm.org
Interpreting state-of-the-art machine learning algorithms can be difficult. For example, why
does a complex ensemble predict a particular class? Existing approaches to interpretable …

CHIRPS: Explaining random forest classification

J Hatwell, MM Gaber, RMA Azad - Artificial Intelligence Review, 2020 - Springer
Modern machine learning methods typically produce “black box” models that are opaque to
interpretation. Yet, their demand has been increasing in the Human-in-the-Loop processes …

Model agnostic supervised local explanations

G Plumb, D Molitor… - Advances in neural …, 2018 - proceedings.neurips.cc
Model interpretability is an increasingly important component of practical machine
learning. Some of the most common forms of interpretability systems are example-based …