Apr 2021
Knowledge Distillation as Semiparametric Inference
Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, Lester Mackey
TL;DR
Casts knowledge distillation as a semiparametric inference problem, with the target student model as the estimand, the unknown Bayes class probabilities as a nuisance, and the teacher probabilities as a plug-in nuisance estimate; introduces two enhancements, cross-fitting and loss correction, to mitigate the impact of teacher overfitting and underfitting on student performance; derives new guarantees for the prediction error of standard distillation; and validates these findings empirically on tabular and image data, observing consistent improvements from the knowledge distillation enhancements.
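The cross-fitting enhancement can be pictured concretely: partition the training data into folds and give each example its soft label from a teacher fitted on the other folds, so no example is distilled against a teacher that has memorized it. The sketch below is a minimal illustration under scikit-learn-style assumptions; `cross_fit_soft_labels` and `make_teacher` are hypothetical names, not the paper's API or exact procedure.

```python
# Minimal sketch of cross-fitted soft labels for distillation (illustrative,
# not the paper's exact recipe). Assumes scikit-learn-style estimators.
import numpy as np
from sklearn.model_selection import KFold

def cross_fit_soft_labels(X, y, make_teacher, n_folds=5, seed=0):
    """Out-of-fold teacher probabilities: each example's soft label comes
    from a teacher that never saw it, mitigating teacher overfitting.
    Assumes every training fold contains all classes."""
    soft = np.empty((len(X), len(np.unique(y))))
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, held_idx in kf.split(X):
        teacher = make_teacher().fit(X[train_idx], y[train_idx])
        soft[held_idx] = teacher.predict_proba(X[held_idx])
    return soft
```

The returned out-of-fold probabilities would then stand in for the usual in-sample teacher outputs when fitting the student.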
Abstract
A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model.
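To make this setup concrete, here is a minimal sketch of the standard distillation objective in NumPy: the student is fit by minimizing the cross-entropy from the teacher's class probabilities to its own predicted distribution. The function name and array shapes are assumptions for illustration.

```python
# Minimal sketch of a distillation loss (illustrative). `student_logits` has
# shape (n_samples, n_classes); `teacher_probs` holds the teacher's class
# probabilities with the same shape.
import numpy as np

def distillation_loss(student_logits, teacher_probs):
    """Mean cross-entropy from teacher probabilities to student outputs."""
    # Numerically stable log-softmax of the student logits.
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    log_student = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(teacher_probs * log_student).sum(axis=1).mean()
```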