Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey …
Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions. This poses risks when deploying these …
Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are …
Analysis of how semantic concepts are represented within Convolutional Neural Networks (CNNs) is a widely used approach in Explainable Artificial Intelligence (XAI) for …
AR Asokan, N Kumar, AV Ragam… - 2022 International Joint …, 2022 - ieeexplore.ieee.org
Multimodal Emotion Recognition refers to the classification of input video sequences into emotion labels based on multiple input modalities (usually video, audio and text). In recent …
Concept-based XAI (C-XAI) approaches to explaining neural vision models are a promising field of research, since explanations that refer to concepts (i.e., semantically meaningful parts …
Current approaches for explaining deep learning systems applied to musical data provide results in a low-level feature space, e.g., by highlighting potentially relevant time-frequency …
An important line of research attempts to explain convolutional neural network (CNN) image classifier predictions and intermediate layer representations in terms of human …
Z Shi, F Lyu, Y Liu, F Shang, F Hu, W Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model trained in a source domain must adapt to continuously changing conditions during testing …