Concept embedding analysis: A review

G Schwalbe - arXiv preprint arXiv:2203.13909, 2022 - arxiv.org
Deep neural networks (DNNs) have found their way into many applications with potential
impact on the safety, security, and fairness of human-machine systems. Such applications require basic …

Explainable image classification: The journey so far and the road ahead

V Kamakshi, NC Krishnan - AI, 2023 - mdpi.com
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address
the interpretability challenges posed by complex machine learning models. In this survey …

From hope to safety: Unlearning biases of deep models via gradient penalization in latent space

M Dreyer, F Pahde, CJ Anders, W Samek… - Proceedings of the …, 2024 - ojs.aaai.org
Deep Neural Networks are prone to learning spurious correlations embedded in the training
data, leading to potentially biased predictions. This poses risks when deploying these …

Interpretability is in the mind of the beholder: A causal framework for human-interpretable representation learning

E Marconato, A Passerini, S Teso - Entropy, 2023 - mdpi.com
Research on Explainable Artificial Intelligence has recently started exploring the idea of
producing explanations that, rather than being expressed in terms of low-level features, are …

Evaluating the stability of semantic concept representations in CNNs for robust explainability

G Mikriukov, G Schwalbe, C Hellert, K Bade - World Conference on …, 2023 - Springer
Analysis of how semantic concepts are represented within Convolutional Neural
Networks (CNNs) is a widely used approach in Explainable Artificial Intelligence (XAI) for …

Interpretability for multimodal emotion recognition using concept activation vectors

AR Asokan, N Kumar, AV Ragam… - 2022 International Joint …, 2022 - ieeexplore.ieee.org
Multimodal Emotion Recognition refers to the classification of input video sequences into
emotion labels based on multiple input modalities (usually video, audio and text). In recent …

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?

JH Lee, G Mikriukov, G Schwalbe, S Wermter… - arXiv preprint arXiv …, 2024 - arxiv.org
Concept-based XAI (C-XAI) approaches to explaining neural vision models are a promising
field of research, since explanations that refer to concepts (i.e., semantically meaningful parts …

Concept-based techniques for "musicologist-friendly" explanations in a deep music classifier

F Foscarin, K Hoedt, V Praher, A Flexer… - arXiv preprint arXiv …, 2022 - arxiv.org
Current approaches for explaining deep learning systems applied to musical data provide
results in a low-level feature space, e.g., by highlighting potentially relevant time-frequency …

Unsupervised interpretable basis extraction for concept-based visual explanations

A Doumanoglou, S Asteriadis… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
An important line of research attempts to explain convolutional neural network (CNN) image
classifier predictions and intermediate layer representations in terms of human …

Controllable Continual Test-Time Adaptation

Z Shi, F Lyu, Y Liu, F Shang, F Hu, W Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model
trained in a source domain must adapt to continuously changing conditions during testing …