BriefGPT.xyz
Aug, 2024
Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning
Joon Kim, Sejin Park
TL;DR
This study addresses the problem of deep leakage in federated learning by evaluating the effectiveness of four defense methods: masking, clipping, pruning, and noise addition. The results show that masking, as a defense, effectively resists deep leakage while preserving training performance, outperforming the other methods.
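The masking defense summarized above can be illustrated with a minimal sketch: before a client shares its gradient, a random fraction of its entries is zeroed out, limiting what a gradient-inversion attack such as DLG can reconstruct. The function name `mask_gradient` and the masking ratio below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def mask_gradient(grad, mask_ratio=0.4, rng=None):
    """Randomly zero out a fraction of gradient entries before sharing.

    Hypothetical sketch of random gradient masking; the paper's exact
    masking scheme, ratio, and sampling strategy are assumptions here.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel()
    k = int(mask_ratio * flat.size)           # number of entries to mask
    idx = rng.choice(flat.size, size=k, replace=False)
    masked = flat.copy()
    masked[idx] = 0.0                         # zero the selected entries
    return masked.reshape(grad.shape)

# Example: mask 40% of a toy gradient before it leaves the client.
g = np.arange(10, dtype=np.float64)
g_masked = mask_gradient(g, mask_ratio=0.4, rng=np.random.default_rng(0))
```

Unlike clipping or noise addition, masking leaves the surviving entries untouched, which is consistent with the summary's claim that training performance is preserved.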
Abstract
Federated Learning (FL), in theory, preserves privacy of individual clients' data while producing quality machine learning models. However, attacks such as Deep Leakage from Gradients (DLG) severely question the pr