BriefGPT.xyz
Oct, 2023
Unlearn What You Want to Forget: Efficient Unlearning for LLMs
Jiaao Chen, Diyi Yang
TL;DR
We propose an efficient unlearning framework that introduces lightweight unlearning layers into transformers, enabling large language models to be updated effectively without retraining the entire model, in order to address user data privacy and data-protection regulation concerns. Experiments on classification and generation tasks demonstrate the effectiveness of the proposed method compared with state-of-the-art baselines.
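The mechanism described above, lightweight unlearning layers plugged into a frozen transformer, can be illustrated with a minimal sketch. This is not the authors' code; it only shows the general adapter pattern the TL;DR alludes to, with all dimensions and names chosen for illustration:

```python
import numpy as np

# Hedged sketch (not the paper's implementation): an adapter-style
# "unlearning layer" inserted after a frozen transformer block. Only the
# adapter parameters (W_down, W_up) would be trained on the forgetting
# objective; the base model's weights stay untouched, so no full
# retraining of the LLM is required.

rng = np.random.default_rng(0)
d_model, d_bottleneck = 16, 4  # illustrative sizes

# Stand-in for the frozen base block's output hidden states (batch of 2).
hidden = rng.standard_normal((2, d_model))

# Small-magnitude initialization keeps the layer near-identity at the start.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = rng.standard_normal((d_bottleneck, d_model)) * 0.01

def unlearning_layer(h):
    # Down-project, ReLU, up-project, then add the residual. The residual
    # form means the inserted layer barely perturbs the frozen model
    # until it is trained to forget the targeted data.
    return h + np.maximum(h @ W_down, 0.0) @ W_up

out = unlearning_layer(hidden)
print(out.shape)  # (2, 16)
```

Because the adapter starts near-identity, inserting it leaves the pretrained model's behavior essentially unchanged until the adapter is optimized on the forget set.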
Abstract
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data; however, this process might suffer from privacy issues and violations of data protection regulations.