HMM-free encoder pre-training for streaming RNN transducer

L Huang, J Sun, Y Tang, J Hou, J Chen… - arXiv preprint arXiv:2104.10764, 2021 - arxiv.org
This work describes an encoder pre-training procedure that uses frame-wise labels to improve the training of streaming recurrent neural network transducer (RNN-T) models. A streaming RNN-T trained from scratch usually performs worse than a non-streaming RNN-T. Although it is common to address this issue by pre-training components of the RNN-T with other criteria or with frame-wise alignment guidance, such alignments are not easily obtained in an end-to-end manner. In this work, the frame-wise alignment used to pre-train the streaming RNN-T's encoder is generated without an HMM-based system, yielding an all-neural framework equipped with HMM-free encoder pre-training. This is achieved by expanding the spikes of a CTC model to their left/right blank frames, and two expanding strategies are proposed. To the best of our knowledge, this is the first work to simulate HMM-style frame-wise labels with a CTC model for pre-training. Experiments on the LibriSpeech and MLS English tasks show that, compared with random initialization, the proposed pre-training procedure reduces the WER by a relative 5%~11% and the emission latency by 60 ms. In addition, the method is lexicon-free, so it is friendly to new languages that lack a manually designed lexicon.
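The abstract only states that CTC spikes are expanded to their neighbouring blank frames; the two expanding strategies themselves are not detailed here. The sketch below is a minimal illustration of one such strategy assumed for clarity (splitting each blank run at its midpoint between adjacent spikes), not the paper's exact method. The function name expand_ctc_spikes and the midpoint rule are hypothetical.

    import numpy as np

    def expand_ctc_spikes(frame_labels, blank_id=0):
        # frame_labels: per-frame argmax of a trained CTC model
        # (mostly blank_id, with spikes at non-blank tokens).
        # Assumed strategy: each spike claims the blank frames between
        # itself and its neighbouring spikes, split at the midpoint.
        frame_labels = np.asarray(frame_labels)
        spikes = np.where(frame_labels != blank_id)[0]
        if len(spikes) == 0:
            return frame_labels.copy()
        dense = np.empty_like(frame_labels)
        # Boundaries halfway between consecutive spikes; the edges of the
        # utterance go to the first/last spike.
        boundaries = np.concatenate(
            ([0], (spikes[:-1] + spikes[1:]) // 2 + 1, [len(frame_labels)]))
        for i, s in enumerate(spikes):
            dense[boundaries[i]:boundaries[i + 1]] = frame_labels[s]
        return dense

    # Example: blank frames (0) around spikes for tokens 7 and 3
    print(expand_ctc_spikes([0, 0, 7, 0, 0, 0, 3, 0]))
    # -> [7 7 7 7 7 3 3 3]

The resulting dense label sequence can then serve as a frame-wise target for cross-entropy pre-training of the streaming encoder, in place of an HMM-derived forced alignment.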