Explainable artificial intelligence to increase transparency for revolutionizing healthcare ecosystem and the road ahead

S Roy, D Pal, T Meena - Network Modeling Analysis in Health Informatics …, 2023 - Springer
The integration of deep learning (DL) into co-clinical applications has generated substantial
interest among researchers aiming to enhance clinical decision support systems for various …

Uncertainty-aware and lesion-specific image synthesis in multiple sclerosis magnetic resonance imaging: a multicentric validation study

T Finck, H Li, S Schlaeger, L Grundl… - Frontiers in …, 2022 - frontiersin.org
Generative adversarial networks (GANs) can synthesize high-contrast MRI from lower-
contrast input. Targeted translation of parenchymal lesions in multiple sclerosis (MS), as well …

A dataset generation framework for evaluating megapixel image classifiers and their explanations

G Machiraju, S Plevritis, P Mallick - European Conference on Computer …, 2022 - Springer
Deep learning-based megapixel image classifiers have exceptional prediction performance
in a number of domains, including clinical pathology. However, extracting reliable, human …

On the robustness of explanations of deep neural network models: A survey

A Jyoti, KB Ganesh, M Gayala, NL Tunuguntla… - arXiv preprint arXiv …, 2022 - arxiv.org
Explainability has been widely stated as a cornerstone of the responsible and trustworthy
use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) …

[Book] Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX

S Avidan, G Brostow, M Cissé, GM Farinella, T Hassner - 2022 - books.google.com
The 39-volume set, comprising LNCS volumes 13661 to 13699, constitutes the refereed
proceedings of the 17th European Conference on Computer Vision, ECCV 2022, held in Tel …

Prospector heads: generalized feature attribution for large models & data

G Machiraju, A Derry, A Desai, N Guha, AH Karimi… - arXiv, 2024 - pmc.ncbi.nlm.nih.gov
Feature attribution, the ability to localize regions of the input data that are relevant for
classification, is an important capability for ML models in scientific and biomedical domains …

Rethinking Robustness of Model Attributions

S Kamath, S Mittal, A Deshpande… - Proceedings of the …, 2024 - ojs.aaai.org
For machine learning models to be reliable and trustworthy, their decisions must be
interpretable. As these models find increasing use in safety-critical applications, it is …

Can we trust explainable AI methods on ASR? An evaluation on phoneme recognition

X Wu, P Bell, A Rajan - ICASSP 2024-2024 IEEE International …, 2024 - ieeexplore.ieee.org
Explainable AI (XAI) techniques have been widely used to help explain and understand the
output of deep learning models in fields such as image classification and Natural Language …

Building trust in deep learning-based immune response predictors with interpretable explanations

P Borole, A Rajan - Communications biology, 2024 - nature.com
The ability to predict whether a peptide will get presented on Major Histocompatibility
Complex (MHC) class I molecules has profound implications in designing vaccines …

A Quantitative Approach for Evaluating Disease Focus and Interpretability of Deep Learning Models for Alzheimer's Disease Classification

TYC Tam, L Liang, K Chen, H Wang… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Deep learning (DL) models have shown significant potential in Alzheimer's Disease (AD)
classification. However, understanding and interpreting these models remains challenging …