A Taxonomy for Human Subject Evaluation of Black-Box Explanations in XAI

M. Chromik, M. Schuessler - ExSS-ATEC@IUI, 2020 - mmi.ifi.lmu.de
Abstract
The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning models through explanation methods. However, there is no consensus among the involved disciplines regarding the evaluation of their effectiveness, especially concerning the involvement of human subjects. For our community, such involvement is a prerequisite for rigorous evaluation. To better understand how researchers across the disciplines approach human subject XAI evaluation, we propose developing a taxonomy that is iterated with a systematic literature review. Approaching these studies from an HCI perspective, we analyze which study designs scholars chose for different explanation goals. Based on our preliminary analysis, we present a taxonomy that provides guidance for researchers and practitioners on the design and execution of XAI evaluations. With this position paper, we put our survey approach and preliminary results up for discussion with our fellow researchers.