Authors
Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, Hua Wu
Publication date
2018/7
Conference
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Pages
1118-1127
Description
Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017), and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we extract truly matched segment pairs with attention across the context and response. We jointly introduce these two kinds of attention in one uniform neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models.
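To make the abstract's two attention mechanisms concrete, below is a minimal NumPy sketch of stacked self-attention (multi-granularity representations) and cross-attention matching between a context turn and a response. This is not the authors' implementation; the function names, the layer count, the residual connection, and the way matching maps are formed are illustrative assumptions based on the abstract and on scaled dot-product attention from Vaswani et al. (2017).

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V

def self_attention_stack(X, num_layers=3):
    """Build representations at different granularities by stacking
    self-attention; each layer attends over the previous layer's output."""
    granularities = [X]
    for _ in range(num_layers):
        X = scaled_dot_product_attention(X, X, X) + X    # residual connection (assumed)
        granularities.append(X)
    return granularities                                 # one representation per level

def cross_attention_match(context_reps, response_reps):
    """Match context and response at every granularity with attention
    across the two sequences, collecting segment-pair matching maps."""
    match_maps = []
    for U, R in zip(context_reps, response_reps):
        U_cross = scaled_dot_product_attention(U, R, R)  # context attends to response
        R_cross = scaled_dot_product_attention(R, U, U)  # response attends to context
        match_maps.append(U_cross @ R_cross.T)           # per-segment-pair scores
    return match_maps

# Toy usage: a 5-word context turn and a 4-word response, 8-dim embeddings.
rng = np.random.default_rng(0)
context = rng.normal(size=(5, 8))
response = rng.normal(size=(4, 8))
maps = cross_attention_match(self_attention_stack(context),
                             self_attention_stack(response))
print(len(maps), maps[0].shape)  # 4 granularity levels, each a 5x4 matching map

In the full model these matching maps would feed a downstream scorer over all context turns; that part is omitted here since the abstract does not specify it.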
Total citations
[Citations-per-year histogram, 2018–2024]
Scholar articles
X Zhou, L Li, D Dong, Y Liu, Y Chen, WX Zhao, D Yu… - Proceedings of the 56th Annual Meeting of the …, 2018