From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …

Transparency of deep neural networks for medical image analysis: A review of interpretability methods

Z Salahuddin, HC Woodruff, A Chatterjee… - Computers in biology and …, 2022 - Elsevier
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for
diagnosis and treatment decisions. Deep neural networks have shown the same or better …

Learning disentangled representations in the imaging domain

X Liu, P Sanchez, S Thermos, AQ O'Neil… - Medical Image …, 2022 - Elsevier
Disentangled representation learning has been proposed as an approach to learning
general representations even in the absence of, or with limited, supervision. A good general …

EDGE: Explaining deep reinforcement learning policies

W Guo, X Wu, U Khan, X Xing - Advances in Neural …, 2021 - proceedings.neurips.cc
With the rapid development of deep reinforcement learning (DRL) techniques, there is an
increasing need to understand and interpret DRL policies. While recent research has …

Exploring evaluation methods for interpretable machine learning: A survey

N Alangari, M El Bachir Menai, H Mathkour… - Information, 2023 - mdpi.com
In recent times, the progress of machine learning has facilitated the development of decision
support systems that exhibit predictive accuracy, surpassing human capabilities in certain …

Challenges for machine learning in clinical translation of big data imaging studies

NK Dinsdale, E Bluemke, V Sundaresan, M Jenkinson… - Neuron, 2022 - cell.com
Combining deep learning image analysis methods and large-scale imaging datasets offers
many opportunities to neuroscience imaging and epidemiology. However, despite these …

Learn-explain-reinforce: counterfactual reasoning and its guidance to reinforce an Alzheimer's Disease diagnosis model

K Oh, JS Yoon, HI Suk - IEEE Transactions on Pattern Analysis …, 2022 - ieeexplore.ieee.org
Existing studies on disease diagnostic models focus either on diagnostic model learning for
performance improvement or on the visual explanation of a trained diagnostic model. We …

ICAM-Reg: Interpretable classification and regression with feature attribution for mapping neurological phenotypes in individual scans

C Bass, M Da Silva, C Sudre… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
An important goal of medical imaging is to be able to precisely detect patterns of disease
specific to individual scans; however, this is challenged in brain imaging by the degree of …

Deep Neural Networks in Power Systems: A Review

M Khodayar, J Regan - Energies, 2023 - mdpi.com
Identifying statistical trends for a wide range of practical power system applications,
including sustainable energy forecasting, demand response, energy decomposition, and …

Benchmarking geometric deep learning for cortical segmentation and neurodevelopmental phenotype prediction

A Fawaz, LZJ Williams, A Alansary, C Bass, K Gopinath… - bioRxiv, 2021 - biorxiv.org
The emerging field of geometric deep learning extends the application of convolutional
neural networks to irregular domains such as graphs, meshes and surfaces. Several recent …