XAI systems evaluation: A review of human and computer-centred methods

P Lopes, E Silva, C Braga, T Oliveira, L Rosado - Applied Sciences, 2022 - mdpi.com
The lack of transparency of powerful Machine Learning systems paired with their growth in
popularity over the last decade led to the emergence of the eXplainable Artificial Intelligence …

Integrating transparency, trust, and acceptance: The intelligent systems technology acceptance model (ISTAM)

ES Vorm, DJY Combs - International Journal of Human–Computer …, 2022 - Taylor & Francis
Intelligent systems, such as technologies related to artificial intelligence, robotics, and machine
learning, open new insights into data and expand the concept of work in myriad …

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy

MR Taesiri, G Nguyen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Explaining artificial intelligence (AI) predictions is increasingly important and even
imperative in many high-stakes applications where humans are the ultimate decision-makers …

A systematic review of functions and design features of in-vehicle agents

SC Lee, M Jeon - International Journal of Human-Computer Studies, 2022 - Elsevier
Intelligent agents are expected to become major computing systems in the near future. In-
vehicle agents (IVAs) can be widely adopted and utilized in a driving environment. Despite …

Toward adaptive driving styles for automated driving with users' trust and preferences

M Natarajan, K Akash, T Misu - 2022 17th ACM/IEEE …, 2022 - ieeexplore.ieee.org
As autonomous vehicles (AVs) become ubiquitous, users' trust will be critical for the
successful adoption of such systems. Prior works have shown that the driving styles of AVs …

Examining the effects of power status of an explainable artificial intelligence system on users' perceptions

T Ha, YJ Sah, Y Park, S Lee - Behaviour & Information Technology, 2022 - Taylor & Francis
Contrary to the traditional concept of artificial intelligence, explainable artificial intelligence
(XAI) aims to provide explanations for the prediction results and make users perceive the …

Evaluating effects of enhanced autonomy transparency on trust, dependence, and human-autonomy team performance over time

R Luo, N Du, XJ Yang - International Journal of Human–Computer …, 2022 - Taylor & Francis
As autonomous systems become more complicated, humans may have difficulty deciphering
autonomy-generated solutions and increasingly perceive autonomy as a mysterious black …

Effect of multiple monitoring requests on vigilance and readiness by measuring eye movement and takeover performance

L Xu, L Guo, P Ge, X Wang - … research part F: traffic psychology and …, 2022 - Elsevier
Drivers do not need to supervise an L3 automated driving system but must resume the
dynamic driving task when necessary, so the takeover request system plays a crucial …

Assessment of trust in automation in the “real world”: Requirements for new trust in automation measurement techniques for use by practitioners

N Tenhundfeld, M Demir… - Journal of Cognitive …, 2022 - journals.sagepub.com
Trust in automation is a foundational principle in Human Factors Engineering. An
understanding of trust can help predict and alter much of human-machine interaction (HMI) …

From trust to trust dynamics: Combining empirical and computational approaches to model and predict trust dynamics in human-autonomy interaction

XJ Yang, Y Guo, C Schemanske - Human-Automation Interaction …, 2022 - Springer
Trust in automation has been identified as one central factor in effective human-autonomy
interaction. Despite active research in the past 30 years, most studies have used a …