Jun, 2024
RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models
Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
TL;DR
Through the RKLD algorithm, our experiments achieve significant forgetting quality while effectively preserving model utility.
Abstract
With the passage of the right to be forgotten (RTBF) regulations and the scaling up of language model training datasets, research on model unlearning in large language models (LLMs) …
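As a rough illustration of the "reverse KL" direction named in the title, the sketch below shows a token-level reverse KL distillation loss in PyTorch, where the student distribution (rather than the teacher's, as in standard distillation) leads the divergence. This is a generic, assumed formulation for illustration, not the paper's exact unlearning objective; the function name and tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F


def reverse_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor) -> torch.Tensor:
    """Reverse KL divergence KL(student || teacher), averaged over tokens.

    Illustrative sketch only: the actual RKLD objective in the paper may
    weight, mask, or combine this term differently for unlearning.
    Expected shapes: (batch, seq_len, vocab_size).
    """
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    teacher_log_probs = F.log_softmax(teacher_logits, dim=-1)
    student_probs = student_log_probs.exp()
    # KL(q || p) = sum_x q(x) * (log q(x) - log p(x)), with q = student.
    kl_per_token = (student_probs * (student_log_probs - teacher_log_probs)).sum(dim=-1)
    return kl_per_token.mean()
```

Compared with the forward KL used in conventional distillation, the reverse direction is mode-seeking: it pushes the student toward concentrating probability where the teacher does, rather than covering the teacher's full distribution.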