How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps

R Müller - International Journal of Human–Computer Interaction, 2024 - Taylor & Francis
Saliency maps can explain how deep neural networks classify images. But are they actually
useful for humans? The present systematic review of 68 user studies found that while …

Evidence-based XAI: An empirical approach to design more effective and explainable decision support systems

L Famiglini, A Campagner, M Barandas… - Computers in Biology …, 2024 - Elsevier
This paper proposes a user study aimed at evaluating the impact of Class Activation Maps
(CAMs) as an eXplainable AI (XAI) method in a radiological diagnostic task, the detection of …

Invisible to Machines: Designing AI that Supports Vision Work in Radiology

G Anichini, C Natali, F Cabitza - Computer Supported Cooperative Work …, 2024 - Springer
In this article we provide an analysis focusing on clinical use of two deep learning-based
automatic detection tools in the field of radiology. The value of these technologies conceived …

Algorithmic Authority & AI Influence in Decision Settings: Theories and Implications for Design

A Facchini, C Fregosi, C Natali, A Termine… - Proceedings of the 12th …, 2024 - dl.acm.org
This workshop explores the influence of AI systems on human decision-making (algorithmic
authority) and the broader concept of technology dominance, which includes both positive …