Why skip if you can combine: A simple knowledge distillation technique for intermediate layers

Y. Wu, P. Passban, M. Rezagholizadeh, Q. Liu - arXiv preprint arXiv:2010.03034, 2020 - arxiv.org
With the growth of computing power, neural machine translation (NMT) models also grow and improve accordingly. However, they also become harder to deploy on edge devices due to memory constraints. To cope with this problem, a common practice is to distill knowledge from a large and accurately trained teacher network (T) into a compact student network (S). Although knowledge distillation (KD) is useful in most cases, our study shows that existing KD techniques might not be well suited to deep NMT engines, so we propose a novel alternative. In our model, besides matching T and S predictions, we use a combinatorial mechanism to inject layer-level supervision from T into S. In this paper, we target low-resource settings and evaluate our translation engines for the Portuguese--English, Turkish--English, and English--German directions. Students trained using our technique have 50% fewer parameters and can still deliver results comparable to those of 12-layer teachers.
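As a rough illustration of layer-level supervision by combination (rather than skipping teacher layers), the sketch below matches each student layer against a learned weighted combination of a group of consecutive teacher hidden states. This is a minimal PyTorch sketch under assumed details: the softmax-weighted-average combiner, the MSE matching loss, and the names `LayerCombiner` and `intermediate_kd_loss` are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerCombiner(nn.Module):
    """Combines a group of teacher hidden states into one target for a student layer.

    The combination here is a learned softmax-weighted average; the paper's exact
    combination operator may differ.
    """

    def __init__(self, group_size: int):
        super().__init__()
        # One scalar weight per teacher layer in the group, normalized with softmax.
        self.weights = nn.Parameter(torch.zeros(group_size))

    def forward(self, teacher_states):
        # teacher_states: list of (batch, seq_len, hidden) tensors from consecutive teacher layers.
        stacked = torch.stack(teacher_states, dim=0)          # (group, batch, seq, hidden)
        alphas = F.softmax(self.weights, dim=0).view(-1, 1, 1, 1)
        return (alphas * stacked).sum(dim=0)                  # (batch, seq, hidden)


def intermediate_kd_loss(student_states, teacher_states, combiners, group_size):
    """MSE between each student layer and the combined hidden states of its teacher group.

    Assumes len(teacher_states) == len(student_states) * group_size and equal hidden
    sizes in T and S; otherwise an extra projection layer would be needed.
    """
    loss = 0.0
    for i, (h_s, combine) in enumerate(zip(student_states, combiners)):
        group = teacher_states[i * group_size:(i + 1) * group_size]
        h_t = combine(group)               # every teacher layer in the group contributes
        loss = loss + F.mse_loss(h_s, h_t)
    return loss / len(student_states)


# Example: a 12-layer teacher distilled into a 6-layer student, two teacher layers per group.
combiners = nn.ModuleList([LayerCombiner(group_size=2) for _ in range(6)])
```

In training, a term like this would typically be added to the standard prediction-level KD loss on the T and S output distributions; with a 12-layer teacher and a 6-layer student, grouping two consecutive teacher layers per student layer leaves no teacher layer unsupervised.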