Jan, 2022
Federated Unlearning with Knowledge Distillation
Chen Wu, Sencun Zhu, Prasenjit Mitra
TL;DR
A novel federated unlearning method for federated learning is proposed: a target client's contribution is removed by subtracting the client's historical accumulated updates from the global model, and the model's performance is then restored with knowledge distillation, without using any data from the clients. The method requires no client participation, places no restrictions on the type of neural network, and uses backdoor attacks to evaluate the unlearning effect. Experimental results demonstrate the effectiveness and efficiency of the proposed method.
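A minimal PyTorch-style sketch of the two steps summarized above, assuming the server stores the removed client's per-round updates (client_update_history, keyed by parameter name) and holds some unlabeled data (unlabeled_loader); all names, the temperature T, and the optimizer settings are illustrative assumptions rather than details taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def subtract_client_updates(global_model, client_update_history):
    """Step 1: remove the target client's contribution by subtracting its
    historical accumulated updates from the global model parameters."""
    unlearned = copy.deepcopy(global_model)
    with torch.no_grad():
        for name, param in unlearned.named_parameters():
            for update in client_update_history:  # one dict of tensors per round
                param -= update[name]
    return unlearned

def distill_restore(teacher, student, unlabeled_loader, epochs=1, T=2.0, lr=1e-3):
    """Step 2: restore the skewed unlearned model (student) by distilling from
    the previous global model (teacher) on server-side unlabeled data."""
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for x in unlabeled_loader:
            with torch.no_grad():
                t_logits = teacher(x)
            s_logits = student(x)
            # Soft-label KL divergence between student and teacher predictions.
            loss = F.kl_div(
                F.log_softmax(s_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

Because the teacher is the old global model and the distillation data is unlabeled and held by the server, the performance recovery step needs neither the clients' original data nor their participation.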
Abstract
Federated learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain …