Explaining deep neural networks and beyond: A review of methods and applications

W Samek, G Montavon, S Lapuschkin… - Proceedings of the …, 2021 - ieeexplore.ieee.org
With the broader and highly successful usage of machine learning (ML) in industry and the
sciences, there has been a growing demand for explainable artificial intelligence (XAI) …

Causal reasoning meets visual representation learning: A prospective study

Y Liu, YS Wei, H Yan, GB Li, L Lin - Machine Intelligence Research, 2022 - Springer
Visual representation learning is ubiquitous in various real-world applications, including
visual comprehension, video understanding, multi-modal analysis, human-computer …

Toward transparent ai: A survey on interpreting the inner structures of deep neural networks

T Räuker, A Ho, S Casper… - 2023 IEEE Conference …, 2023 - ieeexplore.ieee.org
The last decade of machine learning has seen drastic increases in scale and capabilities.
Deep neural networks (DNNs) are increasingly being deployed in the real world. However …

CTformer: convolution-free Token2Token dilated vision transformer for low-dose CT denoising

D Wang, F Fan, Z Wu, R Liu, F Wang… - Physics in Medicine & …, 2023 - iopscience.iop.org
Objective. Low-dose computed tomography (LDCT) denoising is an important problem in CT
research. Compared to the normal dose CT, LDCT images are subjected to severe noise …

CX-ToM: Counterfactual explanations with theory-of-mind for enhancing human trust in image recognition models

AR Akula, K Wang, C Liu, S Saba-Sadiya, H Lu… - iScience, 2022 - cell.com
We propose CX-ToM, short for counterfactual explanations with theory-of-mind, a new
explainable AI (XAI) framework for explaining decisions made by a deep convolutional …

Disentangled explanations of neural network predictions by finding relevant subspaces

P Chormai, J Herrmann, KR Müller… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Explainable AI aims to overcome the black-box nature of complex ML models like neural
networks by generating explanations for their predictions. Explanations often take the form of …

The need for interpretable features: Motivation and taxonomy

A Zytek, I Arnaldo, D Liu, L Berti-Equille… - ACM SIGKDD …, 2022 - dl.acm.org
Through extensive experience developing and explaining machine learning (ML)
applications for real-world domains, we have learned that ML models are only as …

A survey of interpretability research on convolutional neural networks

H Dou, L Zhang, F Han, F Shen, J Zhao - Journal of Software (软件学报), 2023 - jos.org.cn
Neural network models have grown ever more powerful and are widely applied to a variety of computing tasks, where they demonstrate excellent performance; however, humans do not fully understand how these models operate. This paper reviews and organizes research on neural network interpretability …

Explanatory object part aggregation for zero-shot learning

X Chen, X Deng, Y Lan, Y Long, J Weng… - … on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Zero-shot learning (ZSL) aims to recognize objects from unseen classes only based on
labeled images from seen classes. Most existing ZSL methods focus on optimizing feature …

Understanding neural network through neuron level visualization

H Dou, F Shen, J Zhao, X Mu - Neural Networks, 2023 - Elsevier
Neurons are the fundamental units of neural networks. In this paper, we propose a method
for explaining neural networks by visualizing the learning process of neurons. For a trained …