Unified multimodal punctuation restoration framework for mixed-modality corpus

Y Zhu, L Wu, S Cheng, M Wang - ICASSP 2022-2022 IEEE …, 2022 - ieeexplore.ieee.org
The punctuation restoration task aims to correctly punctuate the output transcriptions of automatic speech recognition systems. Previous punctuation models, which either use text only or require the corresponding audio, tend to be constrained in real-world scenarios, where unpunctuated sentences are a mixture of those with and without audio. This paper proposes a unified multimodal punctuation restoration framework, named UniPunc, which punctuates such mixed sentences with a single model. UniPunc jointly represents audio and non-audio samples in a shared latent space, from which the model learns a hybrid representation and punctuates both kinds of samples. We validate the effectiveness of UniPunc on real-world datasets, where it outperforms various strong baselines (e.g. BERT, MuSe) by at least 0.8 overall F1 score, setting a new state of the art. Extensive experiments show that UniPunc's design is a general solution: grafted onto previous models, it enables them to punctuate the mixed corpus. Our code is available at github.com/Yaoming95/UniPunc
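The core idea of handling audio and non-audio samples in one model can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); all names, dimensions, and the use of a shared learned placeholder vector for missing audio are assumptions made for illustration, with random weights standing in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

D_TEXT, D_AUDIO, N_CLASSES = 16, 8, 4  # hypothetical sizes; classes: none , . ?

# Hypothetical "trained" parameters (random here, for illustration only).
W_fuse = rng.normal(size=(D_TEXT + D_AUDIO, D_TEXT))  # hybrid fusion layer
W_out = rng.normal(size=(D_TEXT, N_CLASSES))          # punctuation classifier
virtual_audio = rng.normal(size=(D_AUDIO,))           # placeholder for missing audio

def punctuate(text_feats, audio_feats=None):
    """Predict one punctuation class per token from text features (T, D_TEXT),
    optionally conditioned on aligned audio features (T, D_AUDIO)."""
    T = text_feats.shape[0]
    if audio_feats is None:
        # Non-audio sample: substitute a shared learned placeholder so both
        # kinds of samples are embedded in the same latent space.
        audio_feats = np.tile(virtual_audio, (T, 1))
    hybrid = np.tanh(np.concatenate([text_feats, audio_feats], axis=1) @ W_fuse)
    logits = hybrid @ W_out
    return logits.argmax(axis=1)  # punctuation class id per token
```

Both kinds of samples then pass through the same fusion and classification path, e.g. `punctuate(text_feats)` for a text-only sentence and `punctuate(text_feats, audio_feats)` when aligned audio features are available.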