CTC-based Non-autoregressive Textless Speech-to-Speech Translation

Q Fang, Z Ma, Y Zhou, M Zhang, Y Feng - arXiv preprint arXiv:2406.07330, 2024 - arxiv.org
Direct speech-to-speech translation (S2ST) has achieved impressive translation quality, but it often suffers from slow decoding due to the considerable length of speech sequences. Recently, some research has turned to non-autoregressive (NAR) models to expedite decoding, yet their translation quality typically lags significantly behind autoregressive (AR) models. In this paper, we investigate the performance of CTC-based NAR models in S2ST, as these models have shown impressive results in machine translation. Experimental results demonstrate that by combining pretraining, knowledge distillation, and advanced NAR training techniques such as glancing training and non-monotonic latent alignments, CTC-based NAR models achieve translation quality comparable to the AR model, while achieving up to 26.81× decoding speedup.
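The speedup comes from CTC's parallel decoding: the model emits a label for every frame in one pass, and the output sequence is recovered by collapsing repeats and removing blanks rather than generating tokens one at a time. A minimal sketch of that collapse step, with a hypothetical blank symbol (not the paper's actual implementation):

```python
BLANK = "_"  # hypothetical CTC blank symbol for illustration

def ctc_collapse(frame_labels):
    """Collapse a per-frame CTC label sequence:
    merge consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for label in frame_labels:
        # Emit a label only when it differs from the previous frame
        # and is not the blank symbol.
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# All frame labels are predicted in parallel by the NAR model;
# this post-processing runs in a single linear pass.
print("".join(ctc_collapse(list("hh_ee_ll_ll_oo"))))
```

Because every frame's label is predicted independently, the whole sequence decodes in one forward pass, which is the source of the large speedup over AR token-by-token generation.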