Dual-Space Knowledge Distillation for Large Language Models

S Zhang, X Zhang, Z Sun, Y Chen, J Xu - arXiv preprint arXiv:2406.17328, 2024 - arxiv.org
Knowledge distillation (KD) is a promising solution for compressing large language models (LLMs) by transferring their knowledge to smaller models. During this process, white …
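
Across these results, "white-box" KD refers to settings where the teacher's output distribution is available to the student. As background, here is a minimal, generic sketch of a token-level white-box KD loss (forward KL between teacher and student distributions); it illustrates plain KD, not the dual-space method of this paper, and all tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Forward KL(teacher || student) over all token positions.

    Both logit tensors have shape [batch, seq_len, vocab_size]; the result is
    the KL summed over positions and vocabulary, divided by batch size
    (PyTorch's "batchmean" reduction), and scaled by T^2 as in standard KD.
    """
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Toy usage with random logits (batch of 2, sequence of 4, vocab of 32).
loss = token_level_kd_loss(torch.randn(2, 4, 32), torch.randn(2, 4, 32))
```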

MiniLLM: Knowledge distillation of large language models

Y Gu, L Dong, F Wei, M Huang - The Twelfth International …, 2024 - openreview.net
Knowledge Distillation (KD) is a promising technique for reducing the high computational
demand of large language models (LLMs). However, previous KD methods are primarily …
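
The snippet is cut off, but my understanding is that MiniLLM's central change to standard KD is to minimize the reverse KL, KL(student || teacher), optimized with a policy-gradient-style procedure; this is an assumption not stated in the snippet above. The sketch below shows only the naive differentiable reverse-KL term for intuition, not the paper's full training algorithm.

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits, teacher_logits):
    """Reverse KL(student || teacher), averaged over batch and sequence.

    Shapes: [batch, seq_len, vocab_size]. This is only the loss direction
    assumed above, not necessarily MiniLLM's exact objective or optimizer.
    """
    log_q = F.log_softmax(student_logits, dim=-1)  # student log-probs
    log_p = F.log_softmax(teacher_logits, dim=-1)  # teacher log-probs
    q = log_q.exp()
    # sum_v q(v) * (log q(v) - log p(v)) per position, then mean over positions
    return (q * (log_q - log_p)).sum(dim=-1).mean()

# Toy usage with random logits.
loss = reverse_kl_loss(torch.randn(2, 4, 32), torch.randn(2, 4, 32))
```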

Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment

T Peng, J Zhang - arXiv preprint arXiv:2409.12545, 2024 - arxiv.org
Knowledge distillation (KD) is an effective model compression method that can transfer the
internal capabilities of large language models (LLMs) to smaller ones. However, the multi …

DDK: Distilling domain knowledge for efficient large language models

J Liu, C Zhang, J Guo, Y Zhang, H Que, K Deng… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite the advanced capabilities of large language models (LLMs) in various applications, they still face significant computational and storage demands. Knowledge …

SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models

J Koo, Y Hwang, Y Kim, T Kang, H Bae… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite the success of Large Language Models (LLMs), they still face challenges related to
high inference costs and memory requirements. To address these issues, Knowledge …

Dynamic knowledge distillation for pre-trained language models

L Li, Y Lin, S Ren, P Li, J Zhou, X Sun - arXiv preprint arXiv:2109.11295, 2021 - arxiv.org
Knowledge distillation (KD) has been proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student …

PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning

G Kim, D Jang, E Yang - arXiv preprint arXiv:2402.12842, 2024 - arxiv.org
Recent advancements in large language models (LLMs) have raised concerns about
inference costs, increasing the need for research into model compression. While knowledge …

MixKD: Towards efficient distillation of large-scale language models

KJ Liang, W Hao, D Shen, Y Zhou, W Chen… - arXiv preprint arXiv …, 2020 - arxiv.org
Large-scale language models have recently demonstrated impressive empirical
performance. Nevertheless, the improved results are attained at the price of bigger models …
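
The snippet stops before describing the method; the title suggests mixup-style data augmentation combined with distillation, and under that assumption the sketch below interpolates input embeddings and teacher outputs. This is not taken from the paper: the model interface (a HuggingFace-style forward that accepts `inputs_embeds`) and all names are illustrative placeholders.

```python
import torch.nn.functional as F

def mixup_kd_loss(student, teacher_logits_a, teacher_logits_b,
                  emb_a, emb_b, lam):
    """KD loss on a mixup-interpolated example (hedged sketch, see above).

    emb_a, emb_b: input embeddings of two training examples, [batch, seq, d].
    teacher_logits_a/b: the teacher's logits on the two original examples.
    lam: mixup coefficient in [0, 1] (e.g. drawn from a Beta distribution).
    Assumes a HuggingFace-style student that accepts `inputs_embeds`.
    """
    mixed_emb = lam * emb_a + (1.0 - lam) * emb_b
    mixed_teacher = lam * teacher_logits_a + (1.0 - lam) * teacher_logits_b
    student_logits = student(inputs_embeds=mixed_emb).logits
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(mixed_teacher, dim=-1),
        reduction="batchmean",
    )
```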

Towards efficient pre-trained language model via feature correlation distillation

K Huang, X Guo, M Wang - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Knowledge Distillation (KD) has emerged as a promising approach for compressing
large Pre-trained Language Models (PLMs). The performance of KD relies on how to …
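
The snippet is truncated before the method is described. As a hedged, generic illustration of feature-correlation-style distillation (not necessarily the exact formulation of this paper), one can match the token-token correlation matrices of teacher and student hidden states, which also sidesteps the hidden-size mismatch between the two models.

```python
import torch
import torch.nn.functional as F

def feature_correlation_loss(student_hidden, teacher_hidden):
    """MSE between token-token correlation (Gram) matrices.

    student_hidden: [batch, seq_len, d_student],
    teacher_hidden: [batch, seq_len, d_teacher]; the hidden sizes may differ,
    since correlations are computed across tokens, not feature dimensions.
    """
    s = F.normalize(student_hidden, dim=-1)
    t = F.normalize(teacher_hidden, dim=-1)
    corr_s = s @ s.transpose(-1, -2)  # [batch, seq_len, seq_len]
    corr_t = t @ t.transpose(-1, -2)
    return F.mse_loss(corr_s, corr_t)

# Toy usage: student hidden size 16, teacher hidden size 32.
loss = feature_correlation_loss(torch.randn(2, 8, 16), torch.randn(2, 8, 32))
```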