LICO: explainable models with language-image consistency

Y Lei, Z Li, Y Li, J Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Interpreting the decisions of deep learning models has been actively studied since the
explosion of deep neural networks. One of the most convincing interpretation approaches is …

DEAL: Disentangle and localize concept-level explanations for VLMs

T Li, M Ma, X Peng - European Conference on Computer Vision, 2025 - Springer
Large pre-trained Vision-Language Models (VLMs) have become ubiquitous
foundational components of other models and downstream tasks. Although powerful, our …

Improving visual grounding by encouraging consistent gradient-based explanations

Z Yang, K Kafle, F Dernoncourt… - Proceedings of the …, 2023 - openaccess.thecvf.com
We propose a margin-based loss for tuning joint vision-language models so that their
gradient-based explanations are consistent with region-level annotations provided by …
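The snippet names a margin-based loss that aligns gradient-based explanations with region-level annotations. A minimal sketch of one plausible form follows; the Grad-CAM-style attribution, the normalization, and the margin value are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def attribution_map(logits, features, class_idx):
    """Grad-CAM-style attribution: gradient-weighted feature map.

    features: (B, C, H, W) intermediate activations, kept in the graph
    with requires_grad so second-order gradients can flow at train time.
    """
    score = logits[torch.arange(logits.size(0)), class_idx].sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)         # (B, C, 1, 1)
    cam = F.relu((weights * features).sum(dim=1))          # (B, H, W)
    return cam / (cam.flatten(1).amax(dim=1).view(-1, 1, 1) + 1e-8)

def margin_consistency_loss(cam, region_mask, margin=0.2):
    """Hinge loss (assumed form): attribution mass inside the annotated
    region should exceed the mass outside it by at least `margin`."""
    inside = (cam * region_mask).flatten(1).sum(1)
    outside = (cam * (1 - region_mask)).flatten(1).sum(1)
    total = inside + outside + 1e-8
    return F.relu(margin - (inside - outside) / total).mean()
```

Added to the task loss with a small weight, such a term penalizes the model only when its explanation drifts outside the annotated region by more than the margin.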

Studying How to Efficiently and Effectively Guide Models with Explanations

S Rao, M Böhle, A Parchami-Araghi… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite being highly performant, deep neural networks might base their decisions on
features that spuriously correlate with the provided labels, thus hurting generalization. To …
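This line of work trains models so that their attributions concentrate inside ground-truth localization masks rather than on spuriously correlated background. One commonly evaluated guidance objective is an energy-style loss that maximizes the fraction of positive attribution mass inside the annotated region; the sketch below is an assumed generic form, not the paper's exact implementation:

```python
import torch

def energy_guidance_loss(attributions, mask):
    """Energy-style localization loss (assumed form).

    attributions: (B, H, W) attribution maps (e.g., Grad-CAM or B-cos).
    mask:         (B, H, W) binary ground-truth localization masks.
    Returns 0 when all positive attribution lies inside the mask.
    """
    pos = attributions.clamp(min=0)
    inside = (pos * mask).flatten(1).sum(1)
    total = pos.flatten(1).sum(1) + 1e-8
    return (1.0 - inside / total).mean()
```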

Are Data-driven Explanations Robust against Out-of-distribution Data?

T Li, F Qiao, M Ma, X Peng - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
As black-box models increasingly power high-stakes applications, a variety of data-driven
explanation methods have been introduced. Meanwhile, machine learning models are …

Allowing humans to interactively guide machines where to look does not always improve a human-AI team's classification accuracy

G Nguyen, MR Taesiri, SSY Kim, A Nguyen - arXiv preprint arXiv …, 2024 - arxiv.org
Via thousands of papers in Explainable AI (XAI), attention maps (Vaswani et al., 2017)
and feature attribution maps (Bansal et al., 2020) have been established as a common …

ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization

H Nguyen, H Nguyen, M Chang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Understanding the severity of conditions shown in images is crucial in medical diagnosis,
serving as a key guide for clinical assessment and treatment, as well as for evaluating longitudinal …
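The title pairs contrastive learning with preference optimization over severity. A hedged sketch of how the two objectives might combine: a Bradley-Terry-style preference loss on scalar severity scores plus a pairwise contrastive term on embeddings. All function names and loss forms here are assumptions for illustration, not ConPro's actual losses:

```python
import torch
import torch.nn.functional as F

def severity_preference_loss(score_more, score_less):
    """Bradley-Terry-style preference loss (assumed form): the image
    judged clinically more severe should receive a higher scalar score."""
    return -F.logsigmoid(score_more - score_less).mean()

def pairwise_contrastive_loss(z_a, z_b, same_severity, margin=1.0):
    """Contrastive term on embeddings (assumed form): pull together
    pairs with the same severity grade, push apart different grades.

    same_severity: (B,) float tensor of 1.0 (same grade) or 0.0.
    """
    d = F.pairwise_distance(z_a, z_b)
    pos = same_severity * d.pow(2)
    neg = (1 - same_severity) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()
```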

Using explanations to guide models

S Rao, M Böhle, A Parchami-Araghi… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep neural networks are highly performant, but might base their decisions on spurious or
background features that co-occur with certain classes, which can hurt generalization. To …

Leveraging saliency priors and explanations for enhanced consistent interpretability

L Dong, L Chen, Z Fu, C Zheng, X Cui… - Expert Systems with …, 2024 - Elsevier
Deep neural networks have emerged as highly effective tools for computer vision systems,
showcasing remarkable performance. However, the intrinsic opacity, potential biases, and …

OCIE: Augmenting model interpretability via Deconfounded Explanation-Guided Learning

L Dong, L Chen, C Zheng, Z Fu, U Zukaib, X Cui… - Knowledge-Based …, 2024 - Elsevier
Deep neural networks (DNNs) often encounter significant challenges related to opacity,
inherent biases, and shortcut learning, which undermine their practical reliability. In this …