Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Faithful and Robust Local Interpretability for Textual Predictions

G Lopardo, F Precioso, D Garreau - arXiv preprint arXiv:2311.01605, 2023 - arxiv.org
Interpretability is essential for machine learning models to be trusted and deployed in critical
domains. However, existing methods for interpreting text models are often complex, lack …

Exploring local interpretability in dimensionality reduction: Analysis and use cases

N Mylonas, I Mollas, N Bassiliades… - Expert Systems with …, 2024 - Elsevier
Dimensionality reduction is a crucial area in artificial intelligence that enables the
visualization and analysis of high-dimensional data. The main use of dimensionality …

HEMAsNet: A Hemisphere Asymmetry Network Inspired by the Brain for Depression Recognition From Electroencephalogram Signals

J Shen, K Li, H Liang, Z Zhao, Y Ma… - IEEE Journal of …, 2024 - ieeexplore.ieee.org
Depression is a prevalent mental disorder that affects a significant portion of the global
population. Despite recent advancements in EEG-based depression recognition models …

On the persistence of multilabel learning, its recent trends, and its open issues

N Mylonas, I Mollas, B Liu… - IEEE Intelligent …, 2023 - ieeexplore.ieee.org
Multilabel data comprise instances associated with multiple binary target variables. The
main learning task from such data is multilabel classification, where the goal is to output a …

Attention Meets Post-hoc Interpretability: A Mathematical Perspective

G Lopardo, F Precioso, D Garreau - arXiv preprint arXiv:2402.03485, 2024 - arxiv.org
Attention-based architectures, in particular transformers, are at the heart of a technological
revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range …

[PDF] Harnessing the Power of Knowledge Graphs to Enhance LLM Explainability in the BioMedical Domain

AH Shariatmadari, S Guo, S Srinivasan, A Zhang - 2024 - llms4science-community.github.io
This paper discusses the critical issue of enhancing the explainability and performance of
Large Language Models (LLMs) in the biomedical domain by leveraging the structural …

[PDF] On the Adaptability of Attention-Based Interpretability in Different Transformer Architectures for Multi-Class Classification Tasks

G Tsoumakas - project.inria.fr
Transformers are widely recognized as leading models for NLP tasks due to their attention-
based architecture. However, their complexity and numerous parameters hinder the …