Deep reinforcement learning for sequence-to-sequence models

Y Keneshloo, T Shi, N Ramakrishnan… - IEEE Transactions on Neural Networks and Learning Systems, 2019 - ieeexplore.ieee.org
In recent times, sequence-to-sequence (seq2seq) models have gained a lot of popularity and provide state-of-the-art performance in a wide variety of tasks, such as machine translation, headline generation, text summarization, speech-to-text conversion, and image caption generation. The underlying framework for all these models is usually a deep neural network comprising an encoder and a decoder. Although simple encoder-decoder models produce competitive results, many researchers have proposed additional improvements over these seq2seq models, e.g., attention-based models over the input, pointer-generation models, and self-attention models. However, such seq2seq models suffer from two common problems: 1) exposure bias, where the model is trained on ground-truth prefixes but must condition on its own previous predictions at test time, and 2) inconsistency between the training objective (typically word-level cross-entropy) and the sequence-level metrics used for evaluation at test time. Recently, a completely novel point of view has emerged for addressing these two problems in seq2seq models, leveraging methods from reinforcement learning (RL). In this survey, we consider seq2seq problems from the RL point of view and provide a formulation that combines the power of RL methods in decision making with the ability of seq2seq models to retain long-term dependencies. We present some of the most recent frameworks that combine concepts from RL and deep neural networks. Our work aims to provide insights into some of the problems that inherently arise with current approaches and into how better RL models can address them. We also provide the source code for implementing most of the RL models discussed in this paper to support the complex task of abstractive text summarization, along with targeted experiments for these RL models, in terms of both performance and training time.
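
As a concrete illustration of the RL view the abstract describes (a minimal sketch, not the authors' released code), the decoder can be treated as a policy: each emitted token is an action, and a sequence-level score such as ROUGE or BLEU serves as the reward, with a REINFORCE-style gradient flowing through the log-probabilities of the sampled tokens. The module sizes, the <bos> convention, and the toy overlap reward below are illustrative assumptions; a real system would plug in an actual evaluation metric as the reward.

```python
# Sketch of policy-gradient (REINFORCE) training for a toy seq2seq model.
# Assumed for illustration: vocabulary/layer sizes, <bos> token id 0, and
# a placeholder reward standing in for ROUGE/BLEU.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 100, 32, 64

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRUCell(EMB, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, max_len=10):
        # Encode the source; the final hidden state initializes the decoder.
        _, h = self.encoder(self.embed(src))
        h = h.squeeze(0)
        tok = torch.zeros(src.size(0), dtype=torch.long)  # <bos> = 0 (assumed)
        log_probs, tokens = [], []
        for _ in range(max_len):
            h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()                # sample an "action" (next token)
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok)
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

def toy_reward(sample, reference):
    # Placeholder reward: per-sequence token overlap with the reference.
    # A real summarizer would score the decoded text with ROUGE instead.
    return (sample == reference).float().mean(dim=1)

model = Seq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

src = torch.randint(0, VOCAB, (4, 12))   # toy source batch
ref = torch.randint(0, VOCAB, (4, 10))   # toy reference batch

samples, log_probs = model(src)
reward = toy_reward(samples, ref)
baseline = reward.mean()                 # simple baseline to reduce variance
# REINFORCE: raise the log-probability of tokens in above-baseline samples.
loss = -((reward - baseline).detach().unsqueeze(1) * log_probs).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Because the model samples its own tokens during training and is rewarded at the sequence level, this setup sidesteps both problems the abstract names: it never conditions only on ground-truth prefixes (exposure bias), and it optimizes the same kind of sequence-level measure used at test time.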