Understanding black box models has become paramount as systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response …
R Müller - International Journal of Human–Computer Interaction, 2024 - Taylor & Francis
Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while …
In this article, we propose a conceptual and methodological framework for measuring the impact of the introduction of AI systems in decision settings, based on the concept of …
This paper proposes a user study aimed at evaluating the impact of Class Activation Maps (CAMs) as an eXplainable AI (XAI) method in a radiological diagnostic task, the detection of …
The emergence of black-box, subsymbolic, and statistical AI systems has motivated a rapid increase in interest in explainable AI (XAI), which encompasses both inherently …
Explainable AI (XAI) has the potential to enhance decision-making in human-AI collaborations, yet existing research indicates that explanations can also lead to undue …
A comprehensive assessment of the impact of eXplainable AI (XAI) on diagnostic decision-making should adopt a socio-technical perspective. Our study focuses on Decision Support …
CI Eke, L Shuib - Neural Computing and Applications, 2024 - Springer
The healthcare sector has advanced significantly as a result of the ability of artificial intelligence (AI) to solve cognitive problems that once required human intelligence. As …
Recent advances in technology have propelled Artificial Intelligence (AI) into a crucial role in everyday life, enhancing human performance through sophisticated models and algorithms …