On the explainability of natural language processing deep models

J El Zini, M Awad - ACM Computing Surveys, 2022 - dl.acm.org
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …

Analysis methods in neural language processing: A survey

Y Belinkov, J Glass - Transactions of the Association for Computational Linguistics, 2019 - direct.mit.edu
The field of natural language processing has seen impressive progress in recent years, with
neural network models replacing many of the traditional systems. A plethora of new models …

Rationalizing neural predictions

T Lei, R Barzilay, T Jaakkola - arXiv preprint arXiv:1606.04155, 2016 - arxiv.org
Prediction without justification has limited applicability. As a remedy, we learn to extract
pieces of input text as justifications--rationales--that are tailored to be short and coherent, yet …
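
The rationale-extraction setup sketched in this abstract can be summarized as follows (the notation below is a paraphrase, not the paper's exact formulation): a generator gen(x) samples a binary mask z over the input tokens, an encoder enc predicts the label from the selected words only, and two regularizers keep the rationale short and contiguous.

$$
\min_{\mathrm{gen},\,\mathrm{enc}}\;
\mathbb{E}_{z \sim \mathrm{gen}(x)}
\Big[\, \mathcal{L}\big(\mathrm{enc}(x \odot z),\, y\big)
\;+\; \lambda_1 \|z\|_1
\;+\; \lambda_2 \textstyle\sum_t |z_t - z_{t-1}| \,\Big],
\qquad z_t \in \{0,1\},
$$

where the \(\lambda_1\) term penalizes long rationales and the \(\lambda_2\) term penalizes fragmented ones.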

Understanding neural networks through representation erasure

J Li, W Monroe, D Jurafsky - arXiv preprint arXiv:1612.08220, 2016 - arxiv.org
While neural networks have been successfully applied to many natural language processing
tasks, they come at the cost of interpretability. In this paper, we propose a general …
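
A minimal sketch of the erasure idea this snippet alludes to: remove (zero out) one word representation at a time and score its importance by how much the model's confidence in its original prediction drops. The `score_fn` below is a hypothetical stand-in for any classifier that maps a sequence of word vectors to a scalar score; it is not an interface from the paper.

```python
import numpy as np

def erasure_importance(embeddings, score_fn):
    """Importance of each word = drop in the model's score when
    that word's vector is erased (replaced by zeros).

    embeddings : (seq_len, dim) array of word vectors
    score_fn   : callable mapping an embedding matrix to a scalar
                 score, e.g. log-probability of the predicted label
    """
    base = score_fn(embeddings)
    scores = np.zeros(len(embeddings))
    for t in range(len(embeddings)):
        erased = embeddings.copy()
        erased[t] = 0.0                        # erase word t's representation
        scores[t] = base - score_fn(erased)    # larger drop = more important
    return scores
```

With a toy scorer such as `score_fn = lambda E: float(E.sum())` the function runs end to end; in practice `score_fn` would wrap a trained classifier.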

A survey on green deep learning

J Xu, W Zhou, Z Fu, H Zhou, L Li - arXiv preprint arXiv:2111.05193, 2021 - arxiv.org
In recent years, larger and deeper models have been springing up, continuously pushing state-
of-the-art (SOTA) results across various fields like natural language processing (NLP) and …

Visualizing and understanding neural models in NLP

J Li, X Chen, E Hovy, D Jurafsky - arXiv preprint arXiv:1506.01066, 2015 - arxiv.org
While neural networks have been successfully applied to many NLP tasks, the resulting
vector-based models are very difficult to interpret. For example, it is not clear how they …

Interpretable neural predictions with differentiable binary variables

J Bastings, W Aziz, I Titov - arXiv preprint arXiv:1905.08160, 2019 - arxiv.org
The success of neural networks comes hand in hand with a desire for more interpretability.
We focus on text classifiers and make them more interpretable by having them provide a …
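
The "differentiable binary variables" in this snippet are stochastic gates that decide, per token, whether it enters the prediction. The sketch below uses the Hard Concrete relaxation (Louizos et al.) as a stand-in for the paper's HardKuma variables: the family is the same (a reparameterized gate in [0, 1] with point masses at 0 and 1, used to mask token embeddings under a sparsity penalty), but the exact distribution differs.

```python
import torch

def hard_concrete_gate(log_alpha, beta=0.5, gamma=-0.1, zeta=1.1):
    """Sample a stretched-and-clipped gate in [0, 1] per token.

    log_alpha : per-token location parameters, shape (seq_len,)
    Exact zeros and ones are possible, yet the sample stays
    differentiable w.r.t. log_alpha via reparameterization.
    """
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s_bar = s * (zeta - gamma) + gamma           # stretch to (gamma, zeta)
    return s_bar.clamp(0.0, 1.0)                 # clip back to [0, 1]

# Toy usage: gate token embeddings and penalize how many stay open.
seq_len, dim = 6, 8
embeddings = torch.randn(seq_len, dim)
log_alpha = torch.zeros(seq_len, requires_grad=True)
z = hard_concrete_gate(log_alpha)
masked = embeddings * z.unsqueeze(-1)            # only gated tokens pass through
sparsity_penalty = z.mean()                      # crude proxy for an L0-style term
```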

Word embedding for understanding natural language: a survey

Y Li, T Yang - Guide to big data applications, 2018 - Springer
Word embedding, where semantic and syntactic features are captured from unlabeled text
data, is a basic procedure in Natural Language Processing (NLP). The extracted features …

Compression of deep learning models for text: A survey

M Gupta, P Agrawal - ACM Transactions on Knowledge Discovery from Data, 2022 - dl.acm.org
In recent years, the fields of natural language processing (NLP) and information retrieval (IR)
have made tremendous progress thanks to deep learning models like Recurrent Neural …

Linear algebraic structure of word senses, with applications to polysemy

S Arora, Y Li, Y Liang, T Ma, A Risteski - Transactions of the Association for Computational Linguistics, 2018 - direct.mit.edu
Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they
represent when the word is polysemous. Here it is shown that multiple word senses reside in …
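
The claim sketched in this snippet can be written as a linear superposition: the embedding of a polysemous word is approximately a weighted combination of vectors for its individual senses, with weights growing with how often each sense occurs (the notation is a paraphrase, not the paper's exact statement).

$$
v_{\text{tie}} \;\approx\; \alpha_1\, v_{\text{tie(necktie)}} \;+\; \alpha_2\, v_{\text{tie(drawn match)}} \;+\; \alpha_3\, v_{\text{tie(bond)}},
\qquad \alpha_j \ \text{increasing in the frequency of sense } j .
$$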