BriefGPT.xyz
Oct, 2017
Learning Differentially Private Recurrent Language Models
Learning Differentially Private Language Models Without Losing Accuracy
H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang
TL;DR
This paper proposes achieving user-level differential privacy with the federated averaging algorithm while preserving high utility. By training deep networks on user-partitioned data and performing privacy accounting, the authors show that on datasets with a large number of users, differential privacy can be achieved at only a negligible cost in accuracy.
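The core mechanism described above can be sketched in a few lines: each user's model update is clipped to a fixed L2 norm (bounding per-user sensitivity), the clipped updates are averaged, and Gaussian noise is added to the average. This is a simplified illustration, not the paper's implementation; the function names, the noise calibration, and the omission of per-round sampling and the moments-accountant bookkeeping are all assumptions of this sketch.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a user's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update

def dp_federated_average(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Average per-user updates with clipping and Gaussian noise
    (an illustrative core of user-level DP federated averaging)."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in user_updates]
    avg = np.mean(clipped, axis=0)
    # Noise std scales with the clip norm (the per-user sensitivity)
    # divided by the number of users contributing to the average.
    std = noise_multiplier * clip_norm / len(user_updates)
    return avg + rng.normal(0.0, std, size=avg.shape)
```

Because clipping bounds each user's influence on the average, the added noise masks the presence or absence of any single user, which is what makes the guarantee user-level rather than example-level.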
Abstract
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees without sacrificing predictive accuracy. Our work builds on recent advances in the train…