Counterfactual explanations and algorithmic recourses for machine learning: A review

S Verma, V Boonsanong, M Hoang, K Hines… - ACM Computing …, 2020 - dl.acm.org
Machine learning plays a role in many deployed decision systems, often in ways that are
difficult or impossible for human stakeholders to understand. Explaining, in a human …
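As a rough illustration of the counterfactual-explanation idea this review covers, the sketch below searches for a nearby input whose predicted class differs from the original. The logistic-regression model, synthetic data, and greedy coordinate search are illustrative assumptions, not the review's own algorithm.

```python
# Minimal sketch: find a counterfactual x' close to x with a different prediction.
# Greedy coordinate search is a crude stand-in for the optimization-based
# methods the review surveys (assumption, not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_iter=500):
    """Nudge one feature at a time until the predicted class flips."""
    x_cf = x.copy()
    target = 1 - model.predict(x.reshape(1, -1))[0]
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        # try small moves along each coordinate, keep the one that most
        # raises the target-class probability
        candidates = [x_cf + d * step * e
                      for e in np.eye(len(x)) for d in (-1, 1)]
        scores = [model.predict_proba(c.reshape(1, -1))[0, target]
                  for c in candidates]
        x_cf = candidates[int(np.argmax(scores))]
    return x_cf  # may not have flipped within the budget

x = X[0]
x_cf = counterfactual(x, clf)
print("original:     ", clf.predict(x.reshape(1, -1))[0], x.round(2))
print("counterfactual:", clf.predict(x_cf.reshape(1, -1))[0], x_cf.round(2))
```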

Local interpretations for explainable natural language processing: A survey

S Luo, H Ivison, SC Han, J Poon - ACM Computing Surveys, 2024 - dl.acm.org
As the use of deep learning techniques has grown across various fields over the past
decade, complaints about the opaqueness of these black-box models have increased …

Contrastive data and learning for natural language processing

R Zhang, Y Ji, Y Zhang… - Proceedings of the 2022 …, 2022 - aclanthology.org
Current NLP models heavily rely on effective representation learning algorithms. Contrastive
learning is one such technique to learn an embedding space such that similar data sample …
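The entry above describes contrastive learning as learning an embedding space in which similar samples sit close together. The snippet below is a minimal InfoNCE-style loss illustrating that idea; the random embeddings and temperature value are illustrative assumptions, not details from the paper.

```python
# Minimal InfoNCE-style contrastive loss: two "views" of the same example
# (the diagonal pairs) are pulled together, other pairs in the batch are
# pushed apart. Toy data, not the surveyed methods' actual setup.
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same examples."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # pairwise similarities
    labels = np.arange(len(z1))                      # positives on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()         # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))            # slightly perturbed "view"
print("loss, matched views:", round(float(info_nce(z1, z2)), 3))
print("loss, random views: ", round(float(info_nce(z1, rng.normal(size=(8, 16)))), 3))
```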

Finding order in chaos: A novel data augmentation method for time series in contrastive learning

BU Demirel, C Holz - Advances in Neural Information …, 2024 - proceedings.neurips.cc
The success of contrastive learning is well known to be dependent on data augmentation.
Although the degree of data augmentations has been well controlled by utilizing pre-defined …
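For context on the kind of pre-defined augmentations the abstract refers to, the sketch below generates two augmented "views" of a time series with generic jittering and magnitude scaling. These are common baseline augmentations assumed for illustration; the paper's own, novel augmentation method is not reproduced here.

```python
# Generic time-series augmentations of the kind contrastive pipelines use
# (illustrative only; not the paper's proposed method).
import numpy as np

def jitter(x, sigma=0.05, rng=None):
    """Add small Gaussian noise to every time step."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(scale=sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Rescale each series by a random factor near 1."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(loc=1.0, scale=sigma, size=(x.shape[0], 1))

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 128)).reshape(1, -1)  # one channel
view1, view2 = jitter(series, rng=rng), scale(series, rng=rng)   # two "views"
print(view1.shape, view2.shape)
```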

Generating knowledge aware explanation for natural language inference

Z Yang, Y Xu, J Hu, S Dong - Information Processing & Management, 2023 - Elsevier
Natural language inference (NLI) is an increasingly important task of natural language
processing, and explainable NLI generates natural language explanations (NLEs) in …

LMExplainer: A knowledge-enhanced explainer for language models

Z Chen, AK Singh, M Sra - arXiv preprint arXiv:2303.16537, 2023 - arxiv.org
Large language models (LLMs) such as GPT-4 are powerful and can handle many kinds of
natural language processing (NLP) tasks. However, it can be difficult to interpret the …

Distinguish before answer: Generating contrastive explanation as knowledge for commonsense question answering

Q Chen, G Xu, M Yan, J Zhang, F Huang, L Si… - arXiv preprint arXiv …, 2023 - arxiv.org
Existing knowledge-enhanced methods have achieved remarkable results in certain QA
tasks by obtaining diverse knowledge from different knowledge bases. However, limited by …

XplainLLM: A QA explanation dataset for understanding LLM decision-making

Z Chen, J Chen, M Gaidhani, A Singh, M Sra - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have recently made impressive strides in natural language
understanding tasks. Despite their remarkable performance, understanding their decision …

A Survey on Natural Language Counterfactual Generation

Y Wang, X Qiu, Y Yue, X Guo, Z Zeng, Y Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Natural language counterfactual generation aims to minimally modify a given text such that
the modified text will be classified into a different class. The generated counterfactuals …
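The task defined in the abstract above, minimally editing a text so its label flips, can be illustrated with a toy sketch. The rule-based "classifier" and the tiny substitution lexicon below are hypothetical stand-ins for the model-based and LLM-based generators the survey actually reviews.

```python
# Toy text counterfactual: substitute sentiment-bearing words, one at a time,
# until a (deliberately simple) lexicon classifier changes its label.
POSITIVE = {"great", "good", "excellent", "enjoyable"}
NEGATIVE = {"terrible", "bad", "awful", "boring"}
FLIP = {"great": "terrible", "good": "bad",
        "excellent": "awful", "enjoyable": "boring"}
FLIP.update({v: k for k, v in FLIP.items()})

def classify(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

def counterfactual(text):
    """Accumulate single-word substitutions until the predicted label flips."""
    original = classify(text)
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in FLIP:
            words[i] = FLIP[w.lower()]
            if classify(" ".join(words)) != original:
                return " ".join(words)
    return None  # no label-flipping edit found with this lexicon

text = "The movie was great and the acting was good"
print(classify(text), "->", counterfactual(text))
```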

Large language models as faithful explainers

YN Chuang, G Wang, CY Chang, R Tang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have recently become proficient in addressing complex
tasks by utilizing their rich internal knowledge and reasoning ability. Consequently, this …