VE-KWS: Visual modality enhanced end-to-end keyword spotting

A Zhang, H Wang, P Guo, Y Fu, L Xie, Y Gao, S Zhang, J Feng
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 2023 - ieeexplore.ieee.org
The performance of keyword spotting (KWS) systems based on the audio modality, commonly measured in false alarms and false rejects, degrades significantly under far-field and noisy conditions. Therefore, audio-visual keyword spotting, which leverages complementary relationships across multiple modalities, has recently gained much attention. However, current studies mainly focus on combining the exclusively learned representations of the different modalities, instead of exploring inter-modal relationships during the modeling of each. In this paper, we propose a novel visual modality enhanced end-to-end KWS framework (VE-KWS), which fuses the audio and visual modalities in two ways. The first is using speaker location information, obtained from the lip region in videos, to assist the training of a multi-channel audio beamformer. With the beamformer serving as an audio enhancement module, acoustic distortions caused by far-field or noisy environments can be significantly suppressed. The second is conducting cross-attention between the modalities to capture inter-modal relationships and aid the representation learning of each modality. Experiments on the MISP challenge corpus show that our proposed model achieves a 2.79% false rejection rate and a 2.95% false alarm rate on the Eval set, a new SOTA performance compared with the top-ranking systems in the ICASSP 2022 MISP challenge.
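The cross-attention fusion described in the abstract can be sketched as scaled dot-product attention where one modality supplies the queries and the other the keys and values. The NumPy toy below is a minimal sketch of that idea, not the authors' implementation: the learned projection matrices, multi-head structure, and feature dimensions (50 audio frames, 25 video frames, 64-dim features) are all illustrative assumptions; it only shows the shape bookkeeping of letting each modality attend to the other before concatenation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_feats, d_k):
    """One modality queries another, e.g. audio frames attend to lip frames.

    Learned query/key/value projections are omitted for brevity."""
    scores = query_feats @ key_feats.T / np.sqrt(d_k)   # (Tq, Tk) similarity
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ key_feats                          # (Tq, d) attended features

rng = np.random.default_rng(0)
T_audio, T_video, d = 50, 25, 64                        # illustrative sizes
audio = rng.standard_normal((T_audio, d))
video = rng.standard_normal((T_video, d))

# Each modality attends over the other; the attended features are
# concatenated with the originals to enrich each representation.
audio_enh = np.concatenate([audio, cross_attention(audio, video, d)], axis=-1)
video_enh = np.concatenate([video, cross_attention(video, audio, d)], axis=-1)
print(audio_enh.shape, video_enh.shape)  # (50, 128) (25, 128)
```

The design point is that neither modality's encoder runs in isolation: the attended summary of the other modality is injected during representation learning rather than only at a late fusion stage.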