Incremental learning for end-to-end automatic speech recognition

L Fu, X Li, L Zi, Z Zhang, Y Wu, X He, B Zhou - 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2021 - ieeexplore.ieee.org
In this paper, we propose an incremental learning method for end-to-end Automatic Speech Recognition (ASR) that enables an ASR system to perform well on new tasks while maintaining its performance on previously learned ones. To mitigate catastrophic forgetting during incremental learning, we design a novel explainability-based knowledge distillation for ASR models, combined with a response-based knowledge distillation to preserve both the original model's predictions and the “reason” for those predictions. Our method works without access to the training data of the original tasks, which addresses cases where the previous data is no longer available or joint training is too costly. Results on a multi-stage sequential training task show that our method outperforms existing ones in mitigating forgetting. Furthermore, in two practical scenarios, compared to the target-reference joint training method, the performance drop of our method is only 0.02% Character Error Rate (CER), 97% smaller than the drops of the baseline methods.
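Below is a minimal sketch of how the two distillation terms described in the abstract might be combined, assuming a PyTorch-style setup. The abstract does not specify the explainability method, so input-gradient saliency is used here as an illustrative stand-in, and all names (`student`, `teacher`, `incremental_kd_loss`, `lam_resp`, `lam_expl`) are hypothetical, not from the paper.

```python
import torch
import torch.nn.functional as F

def saliency(model, features, create_graph=False):
    """Input-gradient attribution: how strongly each input frame drives the
    model's top predictions (a stand-in for the paper's explainability method)."""
    feats = features.detach().clone().requires_grad_(True)
    logits = model(feats)                          # (batch, time, vocab)
    score = logits.max(dim=-1).values.sum()        # confidence of top tokens
    grad, = torch.autograd.grad(score, feats, create_graph=create_graph)
    return grad.abs()

def incremental_kd_loss(student, teacher, features, new_task_loss,
                        lam_resp=1.0, lam_expl=1.0, temperature=2.0):
    """Combine the new-task loss with response-based and explainability-based
    distillation against the frozen original (teacher) model."""
    s_logits = student(features)
    with torch.no_grad():
        t_logits = teacher(features)

    # Response-based KD: keep the student's output distribution close to
    # the original model's predictions.
    resp_kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Explainability-based KD: also match the attribution maps, i.e. the
    # "reason" for the predictions.
    expl_kd = F.mse_loss(
        saliency(student, features, create_graph=True),  # grads flow to student
        saliency(teacher, features).detach(),            # fixed target
    )

    return new_task_loss + lam_resp * resp_kd + lam_expl * expl_kd
```

Note that both distillation terms are computed on the new-task inputs only, which is consistent with the abstract's claim that the method needs no access to the original tasks' training data.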