J Zhang,
Z Tao, K Guo, H Li, S Zhang - Information Sciences, 2024 - Elsevier
Abstract: Knowledge distillation (KD) aims to build a lightweight deep neural network model
under the guidance of a large-scale teacher model for model simplicity. Despite improved …