Oct, 2024
Pre-training Distillation for Large Language Models: A Design Space Exploration
Hao Peng, Xin Lv, Yushi Bai, Zijun Yao, Jiajie Zhang...
TL;DR
This paper addresses how to apply knowledge distillation (KD) to large language models (LLMs) during the pre-training stage. It proposes a new approach called pre-training distillation (PD) and, through a systematic exploration of the design space, identifies more effective configurations; in particular, larger student models benefit more from pre-training distillation. The study offers guidance for future pre-training distillation practice.
Abstract
Knowledge distillation (KD) aims to transfer knowledge from a large teacher model to a smaller student model. Previous work applying KD in the field of large language models (LLMs) typically focused on the post-training phase…
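To make the idea concrete, below is a minimal sketch of a logit-based distillation loss applied to pre-training data: the student is trained on the usual next-token cross-entropy plus a KL term toward the teacher's token distribution. The function name, the `alpha` weighting, and the `temperature` value are illustrative assumptions, not configurations reported in the paper.

```python
import torch
import torch.nn.functional as F

def pretraining_distillation_loss(student_logits, teacher_logits, labels,
                                  alpha=0.5, temperature=2.0):
    """Sketch of a pre-training distillation objective.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    labels: (batch, seq_len) next-token targets from the pre-training corpus
    alpha, temperature: illustrative hyperparameters (assumptions).
    """
    # Standard language-modeling loss on the hard corpus tokens.
    lm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
    )

    # Soft-label loss: KL divergence between temperature-scaled
    # teacher and student token distributions.
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    kd_loss = F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2

    # Weighted combination of the two terms; how logits are processed and
    # how losses are combined is part of the design space the paper explores.
    return (1 - alpha) * lm_loss + alpha * kd_loss
```

In practice the teacher logits would be produced either offline (precomputed over the corpus) or online (by running the teacher alongside the student), which is one of the design-space axes the paper examines.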