May, 2022
Towards a Defense against Backdoor Attacks in Continual Federated Learning
Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh
TL;DR
We propose "shadow learning," a framework for defending against backdoor attacks in federated learning: two models are trained, one without any defense mechanism and the other combining malicious-client filtering with early stopping to control the attack success rate. The framework is supported by theoretical guarantees, and experiments show that it significantly improves on existing defenses.
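The two-model scheme in the TL;DR can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the plain FedAvg aggregation, and the boolean `is_filtered` flags (standing in for whatever malicious-client filter is used) are all assumptions made here for clarity, and early stopping of the shadow model is only noted in a comment.

```python
def fedavg(updates):
    # Plain FedAvg aggregation: average the client updates
    # (scalars here for simplicity; vectors work the same way).
    return sum(updates) / len(updates)

def shadow_learning_round(w_main, w_shadow, client_updates, is_filtered):
    """One round of a two-model 'shadow learning' sketch.

    w_main   -- model trained on all clients, with no defense
    w_shadow -- model trained only on clients passing the filter
    is_filtered[i] -- True if client i was flagged as malicious
                      (the filtering criterion itself is a placeholder).
    In the full scheme the shadow model would also be early-stopped
    to keep the attack success rate low; that is omitted here.
    """
    # Model 1: aggregate every client's update, defense-free.
    w_main = w_main + fedavg(client_updates)
    # Model 2 (shadow): aggregate only updates from unflagged clients.
    clean = [u for u, bad in zip(client_updates, is_filtered) if not bad]
    if clean:
        w_shadow = w_shadow + fedavg(clean)
    return w_main, w_shadow

# Toy round: two clients, the second flagged as malicious.
w_main, w_shadow = shadow_learning_round(0.0, 0.0, [1.0, 3.0], [False, True])
```

Here the defense-free model absorbs both updates while the shadow model only sees the unflagged one, which is the core separation the framework relies on.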
Abstract
Backdoor attacks are a major concern in federated learning (FL) pipelines where training data is sourced from untrusted clients over long periods of time (i.e., continual learning). Preventing such attacks is difficult […]