Human-like explanation for text classification with limited attention supervision

D Zhang, C Sen, J Thadajarassiri… - 2021 IEEE International Conference on Big Data (Big Data), 2021 - ieeexplore.ieee.org
Human-like explanation for text classification is essential in high-impact settings such as healthcare, where human rationales are required to support specialists' decisions. Conventional approaches learn explanations using attention mechanisms that assign heavy weights to words with a high impact on a model's prediction. However, such heavily-weighted words often do not reflect human intuition. To advance human rationale, recent studies propose to supervise attention mechanisms, assuming access to a huge set of attention labels collected from humans, called human attention maps (HAMs). Unfortunately, acquiring such HAMs for a huge dataset is tedious, error-prone, and expensive in practice. Thus, we propose the novel problem of text classification with limited human attention supervision. Specifically, we study the learning of human-like attention weights from a dataset in which all documents carry classification labels but only a few documents provide HAMs. To this end, we design a deep learning architecture, HELAS (Human-like Explanation with Limited Attention Supervision), which adaptively learns attention weights that focus on words analogous to a human's, using very limited attention supervision. HELAS effectively unifies joint learning, improving both text classification and human-like explanation even with insufficient supervision labels for the latter task. Our experiments show that HELAS generates attention maps similar to real human annotations, raising similarity scores by up to 22% over state-of-the-art alternatives, even with as little as 2% of the documents having HAMs. It concurrently improves text classification, driving accuracy up by as much as 19% over four state-of-the-art methods.
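The joint objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes the model produces classification logits plus unnormalized attention scores per word, and combines a cross-entropy classification loss with an attention-supervision term (here, KL divergence from the human attention map) that is applied only to the few documents for which a HAM is available. The function names, the KL choice, and the weighting factor `lam` are all hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(logits, label, attn_scores, ham=None, lam=0.5):
    """Hypothetical HELAS-style joint objective.

    logits      : unnormalized class scores, shape (num_classes,)
    label       : gold class index
    attn_scores : unnormalized per-word attention scores, shape (num_words,)
    ham         : human attention map (a distribution over words), or None
                  for the majority of documents that lack one
    lam         : trade-off weight between the two tasks (assumed)
    """
    probs = softmax(logits)
    ce = -np.log(probs[label] + 1e-12)      # classification loss (always applied)
    if ham is None:
        return ce                           # no HAM: classification loss only
    attn = softmax(attn_scores)             # model's attention distribution
    # Attention supervision: KL(ham || attn), zero when they coincide.
    kl = np.sum(ham * (np.log(ham + 1e-12) - np.log(attn + 1e-12)))
    return ce + lam * kl
```

Documents without a HAM contribute only the classification term, which is what lets the model train on the full dataset while the few annotated documents steer the attention weights toward human rationales.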