Z Zhang, B He, Z Zhang - … Conference on Acoustics, Speech …, 2021 - ieeexplore.ieee.org
… high separation quality, we propose a new transformer-based speech separation approach, … The overall architecture of TransMask: Transformer is run over a group of RNNs in order to …
… Abstract—Transformers have enabled impressive improvements in … in speech separation with the WSJ0-2/3 Mix datasets. This paper studies in-depth Transformers for speech separation…
… This paper explores Transformer-based speech separation with … a novel small-footprint speech separation model built upon the … group transformer for long sequence modeling in speech …
S Zhao, B Ma - … International Conference on Acoustics, Speech …, 2023 - ieeexplore.ieee.org
… by building on standard Transformer with multi-head self-… Transformer models to learn local feature patterns. In this work, we propose a novel Monaural speech separation TransFormer (…
… Transformer-based speech separation architecture was proposed in [12], achieving state-of-the-art separation … It was also reported in [15] that incorporating Transformer into an end-to…
… In this paper, an ultra-fast speech separation Transformer … speech separation on LibriCSS dataset. Utilizing more unlabeled … to recover the clean speech, where a group of masks M(t, f)=[…
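The snippet above refers to the common mask-based recovery step: a group of time-frequency masks M(t, f), one per speaker, is applied to the mixture spectrogram to estimate each clean source. A minimal sketch of that masking step, assuming masks in [0, 1] produced by some separation network (the mask values and shapes here are illustrative, not from the cited paper):

```python
import numpy as np

def apply_masks(mixture_stft: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Recover per-speaker spectrograms by elementwise masking.

    mixture_stft: complex array of shape (T, F), the mixture STFT.
    masks: real array of shape (S, T, F), one mask M(t, f) per speaker,
           values in [0, 1] (e.g. the output of a separation network).
    Returns: complex array of shape (S, T, F), estimated speaker STFTs.
    """
    # Broadcasting multiplies each speaker's mask against the mixture.
    return masks * mixture_stft

# Toy usage: 2 speakers, 4 time frames, 3 frequency bins.
rng = np.random.default_rng(0)
mix = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
masks = rng.uniform(size=(2, 4, 3))
est = apply_masks(mix, masks)
assert est.shape == (2, 4, 3)

# If the masks are normalized to sum to 1 over speakers,
# the per-speaker estimates sum back to the mixture exactly.
norm_masks = masks / masks.sum(axis=0, keepdims=True)
assert np.allclose(apply_masks(mix, norm_masks).sum(axis=0), mix)
```

The masked STFTs would then be inverted (e.g. with an inverse STFT) to obtain the separated waveforms.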
S Mo, Y Tian - arXiv preprint arXiv:2407.03736, 2024 - arxiv.org
… Then we adopt 6 self-attention transformer layers to extract … In addition, they leveraged multiple grouping stages during … speech separation,” arXiv preprint arXiv:1804.03619, 2018. 1…
S Zhao, Y Ma, C Ni, C Zhang, H Wang… - … Acoustics, Speech …, 2024 - ieeexplore.ieee.org
Our previously proposed MossFormer has achieved promising performance in monaural speech separation. However, it predominantly adopts a self-attention-based MossFormer …
… speech separation… Transformer-based speech separation with a reduced computational cost. Our main contribution is the development of the Resource-Efficient Separation Transformer …