Multi-document summarization via deep learning techniques: A survey

C Ma, WE Zhang, M Guo, H Wang, QZ Sheng - ACM Computing Surveys, 2022 - dl.acm.org
Multi-document summarization (MDS) is an effective tool for information aggregation that
generates an informative and concise summary from a cluster of topic-related documents …

Going beyond XAI: A systematic survey for explanation-guided learning

Y Gao, S Gu, J Jiang, SR Hong, D Yu, L Zhao - ACM Computing Surveys, 2024 - dl.acm.org
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …

ERASER: A benchmark to evaluate rationalized NLP models

J DeYoung, S Jain, NF Rajani, E Lehman… - arXiv preprint arXiv …, 2019 - arxiv.org
State-of-the-art models in NLP are now predominantly based on deep neural networks that
are opaque in terms of how they come to make predictions. This limitation has increased …

Explanations from large language models make small reasoners better

S Li, J Chen, Y Shen, Z Chen, X Zhang, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Integrating free-text explanations into in-context learning of large language models (LLMs) has
been shown to elicit strong reasoning capabilities along with reasonable explanations. In this …

Do feature attribution methods correctly attribute features?

Y Zhou, S Booth, MT Ribeiro, J Shah - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Feature attribution methods are popular in interpretable machine learning. These methods
compute the attribution of each input feature to represent its importance, but there is no …

Measuring association between labels and free-text rationales

S Wiegreffe, A Marasović, NA Smith - arXiv preprint arXiv:2010.12762, 2020 - arxiv.org
In interpretable NLP, we require faithful rationales that reflect the model's decision-making
process for an explained instance. While prior work focuses on extractive rationales (a …

Explaining black box predictions and unveiling data artifacts through influence functions

X Han, BC Wallace, Y Tsvetkov - arXiv preprint arXiv:2005.06676, 2020 - arxiv.org
Modern deep learning models for NLP are notoriously opaque. This has motivated the
development of methods for interpreting such models, e.g., via gradient-based saliency maps …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Explaining NLP models via minimal contrastive editing (MiCE)

A Ross, A Marasović, ME Peters - arXiv preprint arXiv:2012.13985, 2020 - arxiv.org
Humans have been shown to give contrastive explanations, which explain why an observed
event happened rather than some other counterfactual event (the contrast case). Despite the …

Can rationalization improve robustness?

H Chen, J He, K Narasimhan, D Chen - arXiv preprint arXiv:2204.11790, 2022 - arxiv.org
A growing line of work has investigated the development of neural NLP models that can
produce rationales, subsets of the input that can explain their predictions. In this paper …