Authors
Meng Li, Hengyang Sun, Yanjun Huang, Hong Chen
Publication date
2024/2/9
Source
Autonomous Intelligent Systems
Volume
4
Issue
1
Pages
2
Publisher
Springer Nature Singapore
Description
With the tremendous success of machine learning (ML), concerns about the black-box nature of ML models have grown. The issue of interpretability affects trust in ML systems and raises ethical concerns such as algorithmic bias. In recent years, feature attribution methods based on the Shapley value have become the mainstream explainable artificial intelligence approach for explaining ML models. This paper provides a comprehensive overview of Shapley value-based attribution methods. We begin by outlining the foundational theory of the Shapley value, rooted in cooperative game theory, and discussing its desirable properties. To enhance comprehension and aid in identifying relevant algorithms, we propose a comprehensive classification framework for existing Shapley value-based feature attribution methods along three dimensions: Shapley value type, feature replacement method, and approximation method …
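To make the abstract's core idea concrete, the following is a minimal, illustrative sketch (not the paper's method) of exact Shapley value feature attribution: each feature's contribution is a weighted average of its marginal effect over all coalitions of the other features, with absent features filled in from a baseline, which is one possible "feature replacement" choice. The model `f`, the instance, and the baseline below are hypothetical toy inputs; exact enumeration is exponential in the number of features, which is why the approximation methods the survey classifies exist.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance, enumerating all feature
    coalitions. Absent features are replaced by baseline values.
    Exponential cost: intended only for toy-sized feature sets."""
    n = len(x)
    feats = range(n)

    def value(subset):
        # Features in `subset` keep the instance's values; the rest
        # are replaced with the baseline (a simple replacement scheme).
        z = [x[i] if i in subset else baseline[i] for i in feats]
        return predict(z)

    phi = [0.0] * n
    for i in feats:
        others = [j for j in feats if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical toy model: f(x) = 2*x0 + 3*x1 (linear, so attributions
# recover the weighted inputs and sum to f(x) - f(baseline)).
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))
```

For a linear model the attributions coincide with each feature's weighted deviation from the baseline, and by the efficiency property they always sum to the difference between the model's output at the instance and at the baseline.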