I don't understand! Evaluation methods for natural language explanations

M Clinciu, A Eshghi, H Hastie - CEUR Workshop Proceedings, 2021 - research.ed.ac.uk
Abstract
Explainability of intelligent systems is key for future adoption. While much work is ongoing with regard to developing methods of explaining complex opaque systems, there is little current work on evaluating how effective these explanations are, in particular with respect to the user's understanding. Natural language (NL) explanations can be seen as an intuitive channel between humans and artificial intelligence systems, in particular for enhancing transparency. This paper presents existing work on how evaluation methods from the field of Natural Language Generation (NLG) can be mapped onto NL explanations. We also present a preliminary investigation into the relationship between linguistic features and human evaluation, using a dataset of NL explanations derived from Bayesian Networks.