Explaining the behavior of remote robots to humans: an agent-based approach

Y Mualla - 2020 - theses.hal.science
With the widespread use of Artificial Intelligence (AI) systems, understanding the behavior of intelligent agents and robots is crucial to guarantee smooth human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent studies in the goal-driven Explainable AI (XAI) domain have confirmed that explaining an agent's behavior to humans improves their understanding of the agent and increases its acceptability. However, providing overwhelming or unnecessary information may also confuse human users and cause misunderstandings. For these reasons, the parsimony of explanations has been identified as one of the key features facilitating successful human-agent interaction, with a parsimonious explanation defined as the simplest explanation that describes the situation adequately.

While the parsimony of explanations is receiving growing attention in the literature, most existing works address it only conceptually. This thesis proposes, using a rigorous research methodology, a mechanism for parsimonious XAI that strikes a balance between simplicity and adequacy. In particular, it introduces a context-aware and adaptive process of explanation formulation and proposes a Human-Agent Explainability Architecture (HAExA) that makes this process operational for remote robots represented as Belief-Desire-Intention agents. To provide parsimonious explanations, HAExA relies first on generating normal and contrastive explanations, and second on updating and filtering them before communicating them to the human.

To evaluate the proposed architecture, we design and conduct empirical human-computer interaction studies employing agent-based simulation. The studies rely on well-established XAI metrics to estimate how well understood and how satisfactory the explanations provided by HAExA are. The results are analyzed and validated using parametric and non-parametric statistical tests.
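To make the generate-then-filter idea concrete, the following is a minimal Python sketch of a parsimony filter over candidate explanations. It is an illustration only, not the thesis's implementation: the `Explanation` class, the `relevance` score, and the threshold and cap values are all hypothetical names and parameters invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    text: str
    contrastive: bool  # "why A rather than B" vs. a plain "why A"
    relevance: float   # assumed relevance score in [0, 1] (hypothetical)

def parsimonious_filter(candidates, threshold=0.5, max_items=2):
    """Keep only the most relevant explanations: drop low-relevance
    ones, then cap the number communicated to the human."""
    kept = [e for e in candidates if e.relevance >= threshold]
    kept.sort(key=lambda e: e.relevance, reverse=True)
    return kept[:max_items]

# Candidate explanations a remote robot might generate (invented examples).
candidates = [
    Explanation("Battery low, returning to base", False, 0.9),
    Explanation("Chose route A rather than B: B is blocked", True, 0.7),
    Explanation("Current wind speed is 12 km/h", False, 0.2),
]

for e in parsimonious_filter(candidates):
    print(e.text)
```

The filtering step stands in for the "updating and filtering" stage described above: the low-relevance wind-speed explanation is suppressed, keeping the output simple while still adequate.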