Content-context factorized representations for automated speech recognition

DM Chan, S Ghosh - arXiv preprint arXiv:2205.09872, 2022 - arxiv.org
Deep neural networks have largely demonstrated their ability to perform automated speech recognition (ASR) by extracting meaningful features from input audio frames. Such features, however, may encode not only the spoken language content but also unnecessary context, such as background noise and sounds, speaker identity, accent, or protected attributes. Such information can directly harm generalization performance by introducing spurious correlations between the spoken words and the context in which they were spoken. In this work, we introduce an unsupervised, encoder-agnostic method for factoring speech-encoder representations into explicit content-encoding representations and spurious context-encoding representations. By doing so, we demonstrate improved performance on standard ASR benchmarks, as well as in both real-world and artificially noisy ASR scenarios.
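The abstract does not spell out the training objective, so the following is a minimal sketch of the general idea under the simplest assumptions: encoder frame features are linearly projected into a content subspace and a context subspace, and an unsupervised cross-covariance decorrelation penalty discourages the two factors from sharing information. The names `ContentContextFactorizer` and `decorrelation_loss`, the PyTorch framing, and the specific penalty are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn


class ContentContextFactorizer(nn.Module):
    """Split a speech encoder's frame-level features into a content factor
    (intended to carry the spoken words) and a context factor (intended to
    absorb nuisance information such as noise or speaker identity)."""

    def __init__(self, feat_dim: int, content_dim: int, context_dim: int):
        super().__init__()
        # Hypothetical choice: two linear heads over the encoder output.
        self.to_content = nn.Linear(feat_dim, content_dim)
        self.to_context = nn.Linear(feat_dim, context_dim)

    def forward(self, features: torch.Tensor):
        # features: (batch, time, feat_dim), from any speech encoder;
        # the split operates only on encoder outputs (encoder-agnostic).
        return self.to_content(features), self.to_context(features)


def decorrelation_loss(content: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
    """Unsupervised surrogate for factorization: penalize the cross-covariance
    between centered content and context features so the two factors become
    statistically decorrelated (an assumption, not the paper's stated loss)."""
    b, t, _ = content.shape
    c = content.reshape(b * t, -1)
    x = context.reshape(b * t, -1)
    c = c - c.mean(dim=0, keepdim=True)
    x = x - x.mean(dim=0, keepdim=True)
    cross = (c.T @ x) / (b * t)  # (content_dim, context_dim) cross-covariance
    return cross.pow(2).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    factorizer = ContentContextFactorizer(feat_dim=512, content_dim=256, context_dim=128)
    feats = torch.randn(4, 100, 512)  # stand-in for encoder frame features
    content, context = factorizer(feats)
    print(content.shape, context.shape, decorrelation_loss(content, context).item())
```

In a full system, the content factor would feed the ASR decoder (trained with the usual transcription loss) while the context factor absorbs nuisance variation; since the factorizer consumes only encoder outputs, it can in principle be attached to any encoder, which is consistent with the encoder-agnostic claim in the abstract.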