BriefGPT.xyz
Mar, 2020
Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
TL;DR
This paper examines the security of sharing parameter gradients in federated learning and shows empirically that, in computer vision, user privacy is not protected even when gradients are averaged over multiple iterations or multiple images.
Abstract
The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data.
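The paper's attack reconstructs images by optimizing a dummy input whose gradients match the shared ones (using a cosine-similarity loss). As a minimal illustration of why shared gradients leak data at all, the sketch below shows the simpler, well-known analytic inversion for a single fully-connected layer with bias: each row of the weight gradient is a scaled copy of the input, and the bias gradient supplies the scale. This is a toy NumPy example with made-up data, not the paper's optimization-based method:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Private" data held by a federated-learning client (hypothetical toy values).
x = rng.normal(size=4)          # secret input vector
y = rng.normal(size=3)          # target

# One linear layer with bias: z = W @ x + b, loss = 0.5 * ||z - y||^2
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
z = W @ x + b
g = z - y                       # dL/dz

# Gradients the client would share with the server.
grad_W = np.outer(g, x)         # dL/dW = g x^T
grad_b = g                      # dL/db = g

# Server-side inversion: row i of grad_W equals g_i * x and grad_b[i] = g_i,
# so dividing any row with nonzero g_i recovers the input exactly.
i = np.argmax(np.abs(grad_b))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # True: the input leaks from the gradients
```

For deeper networks and averaged gradients this exact trick no longer applies, which is why the paper instead treats reconstruction as an optimization problem over the input.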