Minimalistic explanations: capturing the essence of decisions

M Schuessler, P Weiß - Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing …, 2019 - dl.acm.org
The use of complex machine learning models can make systems opaque to users. Machine learning research proposes the use of post-hoc explanations. However, it is unclear whether they give users insight into otherwise uninterpretable models. One minimalistic way of explaining image classifications by a deep neural network is to show only the areas that were decisive for the assignment of a label. In a pilot study, 20 participants looked at 14 such explanations generated either by a human or by the LIME algorithm. For explanations of correct decisions, they identified the explained object with significantly higher accuracy (75.64% vs. 18.52%). We argue that this shows that explanations can be very minimalistic while retaining the essence of a decision, but that the decision-making contexts that can be conveyed in this manner are limited. Finally, we found that explanations are unique to the explainer, and human-generated explanations were assigned 79% higher trust ratings. As a starting point for further studies, this work shares our first insights into quality criteria for post-hoc explanations.
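The explanation format described in the abstract (showing only the image regions that were decisive for a label) is what the LIME image explainer produces when all but the top-weighted superpixels are hidden. The paper does not include code; the following is a minimal sketch using the lime Python package, where `classify_batch` is a hypothetical function that maps a batch of RGB images to class probabilities and is not part of the paper.

```python
# Minimal sketch: a LIME-style "decisive areas only" explanation for an image
# classifier. `classify_batch` is a hypothetical (N, H, W, 3) -> (N, num_classes)
# prediction function; it is an assumption, not something defined in the paper.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_image(image: np.ndarray, classify_batch):
    """Return the image with everything except the decisive regions hidden."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image,                # RGB image as a (H, W, 3) uint8 array (assumed)
        classify_batch,       # hypothetical batch prediction function
        top_labels=1,         # explain only the top predicted label
        hide_color=0,         # perturbed superpixels are blacked out
        num_samples=1000,     # number of perturbed samples LIME draws
    )
    top_label = explanation.top_labels[0]
    # Keep only the superpixels that speak for the label and hide the rest,
    # yielding the kind of minimalistic explanation studied in the paper.
    masked_image, mask = explanation.get_image_and_mask(
        top_label,
        positive_only=True,
        num_features=5,       # number of superpixels to keep
        hide_rest=True,
    )
    # Overlay superpixel boundaries for display (assumes uint8 input image).
    return mark_boundaries(masked_image / 255.0, mask)
```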