Time-frequency Network for Robust Speaker Recognition

J Li, T Zhang, X Liu, L Zheng - arXiv preprint arXiv:2303.02673, 2023 - arxiv.org
The wide deployment of speech-based biometric systems demands high-performance speaker recognition algorithms. However, most prior work on speaker recognition processes speech in either the frequency domain or the time domain, which may produce suboptimal results because both domains carry information that matters for speaker recognition. In this paper, we analyze the speech signal in both the time and frequency domains and propose the time-frequency network (TFN) for speaker recognition, which extracts and fuses features from the two domains. Building on recent advances in deep neural networks, we propose a convolutional neural network that encodes the raw speech waveform and the frequency spectrum into domain-specific features, which are then fused and transformed into a classification feature space for speaker recognition. Experimental results on the publicly available TIMIT and LibriSpeech datasets show that our framework effectively combines the information from the two domains and outperforms state-of-the-art methods for speaker recognition.
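The two-branch structure described in the abstract (a time-domain encoder for the raw waveform, a frequency-domain encoder for the spectrum, feature fusion, then a classification projection) can be sketched as follows. This is a minimal illustration, not the paper's architecture: the encoders are stand-in linear-plus-ReLU projections rather than the CNNs the paper proposes, and all dimensions (waveform length, spectrum size, feature width, speaker count) are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify these here.
WAVE_LEN = 16000      # 1 s of raw waveform at 16 kHz
FFT_SIZE = 512        # FFT length for the spectral branch
SPEC_BINS = FFT_SIZE // 2 + 1   # 257 magnitude-spectrum bins
FEAT_DIM = 128        # per-domain embedding size
N_SPEAKERS = 10       # toy speaker-classification head

def encode(x, weights):
    """Stand-in for a domain-specific encoder: a linear projection
    followed by a ReLU non-linearity (the paper uses CNNs instead)."""
    return np.maximum(weights @ x, 0.0)

# Randomly initialised "encoder" and classifier weights.
W_time = rng.standard_normal((FEAT_DIM, WAVE_LEN)) * 0.01
W_freq = rng.standard_normal((FEAT_DIM, SPEC_BINS)) * 0.01
W_cls = rng.standard_normal((N_SPEAKERS, 2 * FEAT_DIM)) * 0.01

waveform = rng.standard_normal(WAVE_LEN)            # time-domain input
spectrum = np.abs(np.fft.rfft(waveform, FFT_SIZE))  # frequency-domain input

f_time = encode(waveform, W_time)       # time-domain features
f_freq = encode(spectrum, W_freq)       # frequency-domain features
fused = np.concatenate([f_time, f_freq])  # feature fusion across domains

logits = W_cls @ fused                  # classification feature space
probs = np.exp(logits - logits.max())   # softmax over speakers
probs /= probs.sum()
print(probs.shape)                      # (10,)
```

The key design point the abstract emphasizes is that neither branch alone sees the full signal: concatenating the two embeddings lets the classifier draw on time-domain and frequency-domain evidence jointly.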