Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
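
As a rough illustration of why this estimation is non-trivial, below is a minimal Monte Carlo (permutation-sampling) sketch of Shapley feature attributions; the model f, the background point used to "remove" features, and the sample count are assumptions for illustration, not the paper's algorithms.

    import numpy as np

    def shapley_permutation(f, x, background, n_perm=200, rng=None):
        # Estimate Shapley attributions by averaging each feature's marginal
        # contribution over random feature orderings; "absent" features are
        # filled in from an assumed background point.
        rng = np.random.default_rng(rng)
        d = len(x)
        phi = np.zeros(d)
        for _ in range(n_perm):
            order = rng.permutation(d)
            z = np.array(background, dtype=float)  # start from the background point
            prev = f(z)
            for j in order:                        # reveal features one at a time
                z[j] = x[j]
                cur = f(z)
                phi[j] += cur - prev               # marginal contribution of feature j
                prev = cur
        return phi / n_perm                        # sums to f(x) - f(background)

Each sampled permutation costs d+1 model evaluations, which is one concrete reason estimators must trade accuracy against compute.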

Explaining deep neural networks and beyond: A review of methods and applications

W Samek, G Montavon, S Lapuschkin… - Proceedings of the …, 2021 - ieeexplore.ieee.org
With the broader and highly successful usage of machine learning (ML) in industry and the
sciences, there has been a growing demand for explainable artificial intelligence (XAI) …

Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset

C Meng, L Trinh, N Xu, J Enouen, Y Liu - Scientific Reports, 2022 - nature.com
The recent release of large-scale healthcare datasets has greatly propelled the research of
data-driven deep learning models for healthcare applications. However, due to the nature of …

From local explanations to global understanding with explainable AI for trees

SM Lundberg, G Erion, H Chen, A DeGrave… - Nature Machine …, 2020 - nature.com
Tree-based machine learning models such as random forests, decision trees and gradient
boosted trees are popular nonlinear predictive models, yet comparatively little attention has …
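
A common entry point to this line of work is the shap package associated with it; a minimal usage sketch, assuming shap and scikit-learn are installed (the California-housing data and random-forest model are placeholders):

    import shap
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import RandomForestRegressor

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)  # placeholder dataset
    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    explainer = shap.TreeExplainer(model)    # tree-specific explainer
    shap_values = explainer.shap_values(X)   # local, per-feature attributions
    shap.summary_plot(shap_values, X)        # aggregate local explanations into a global view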

The limitations of federated learning in sybil settings

C Fung, CJM Yoon, I Beschastnikh - 23rd International Symposium on …, 2020 - usenix.org
Federated learning over distributed multi-party data is an emerging paradigm that iteratively
aggregates updates from a group of devices to train a globally shared model. Relying on a …
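
For reference, the aggregation step that such settings stress is, in its simplest FedAvg form, a weighted average of client parameters; a minimal sketch (weighting by client example counts is the standard FedAvg convention, not this paper's defense):

    import numpy as np

    def fedavg(client_params, client_sizes):
        # Aggregate one federated round: average client parameter vectors,
        # weighted by how many examples each client trained on.
        w = np.asarray(client_sizes, dtype=float)
        w /= w.sum()
        return sum(wi * np.asarray(p, dtype=float) for wi, p in zip(w, client_params))

An adversary controlling many sybil clients can inflate its weight in this average, which is roughly the weakness sybil settings expose.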

Interpretable and explainable machine learning: a methods‐centric overview with concrete examples

R Marcinkevičs, JE Vogt - Wiley Interdisciplinary Reviews: Data …, 2023 - Wiley Online Library
Interpretability and explainability are crucial for machine learning (ML) and statistical
applications in medicine, economics, law, and natural sciences and form an essential …

Stolen memories: Leveraging model memorization for calibrated White-Box membership inference

K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …
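
As background, the simplest (uncalibrated) membership-inference baseline thresholds the model's per-example loss; a sketch of that baseline only, with the loss values and threshold as assumed inputs, not the white-box attack developed in the paper:

    import numpy as np

    def loss_threshold_membership(per_example_loss, threshold):
        # Predict "member of the training set" when the model's loss on an example
        # is below a threshold, exploiting that models tend to fit (memorize)
        # training points more closely than unseen points.
        return np.asarray(per_example_loss) < threshold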

Stakeholders in explainable AI

A Preece, D Harborne, D Braines, R Tomsett… - arXiv preprint arXiv …, 2018 - arxiv.org
There is general consensus that it is important for artificial intelligence (AI) and machine
learning systems to be explainable and/or interpretable. However, there is no general …

Understanding and improving recurrent networks for human activity recognition by continuous attention

M Zeng, H Gao, T Yu, OJ Mengshoel… - Proceedings of the …, 2018 - dl.acm.org
Deep neural networks, including recurrent networks, have been successfully applied to
human activity recognition. Unfortunately, the final representation learned by recurrent …
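
The issue hinted at (relying only on the final recurrent state) is commonly addressed by attention pooling over all timesteps; a generic PyTorch sketch of that idea, not the paper's exact continuous-attention formulation:

    import torch
    import torch.nn as nn

    class TemporalAttentionPooling(nn.Module):
        # Learn weights over all RNN timesteps and return their weighted sum,
        # instead of keeping only the final hidden state.
        def __init__(self, hidden_dim):
            super().__init__()
            self.score = nn.Linear(hidden_dim, 1)

        def forward(self, h):                              # h: (batch, time, hidden)
            alpha = torch.softmax(self.score(h), dim=1)    # (batch, time, 1) weights
            return (alpha * h).sum(dim=1)                  # (batch, hidden) pooled feature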

Neuron shapley: Discovering the responsible neurons

A Ghorbani, JY Zou - Advances in neural information …, 2020 - proceedings.neurips.cc
We develop Neuron Shapley as a new framework to quantify the contribution of
individual neurons to the prediction and performance of a deep network. By accounting for …
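
Conceptually this applies the same permutation-style Shapley estimation sketched for the first entry above, but over neurons rather than input features; the missing piece is a way to score the network with a chosen coalition of neurons ablated. A schematic PyTorch hook for that ablation step (the layer, channel indices, and re-scoring loop are placeholders, not the paper's estimator):

    import torch

    def ablate_channels_hook(channel_idx):
        # Forward hook that zeroes selected output channels ("neurons") of a layer,
        # so the network can be re-scored with that coalition removed.
        def hook(module, inputs, output):
            out = output.clone()
            out[:, channel_idx] = 0.0
            return out                      # returning a tensor replaces the layer's output
        return hook

    # usage sketch (layer and indices are placeholders):
    # handle = model.layer4.register_forward_hook(ablate_channels_hook([3, 17]))
    # ... evaluate accuracy with those neurons removed, then: handle.remove()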