Oct, 2021
Not all noise is accounted equally: How differentially private learning benefits from large sampling rates
Friedrich Dörmann, Osvald Frisk, Lars Nørvang Andersen, Christian Fischer Pedersen
TL;DR
This paper focuses on the privacy-budget problem in differentially private learning and proposes a training regime in which the noise scale is adjusted so that more of the injected noise is accounted for within the privacy budget, yielding a better trade-off between preserving privacy and maintaining utility.
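To make the interplay between noise scale and sampling rate concrete, below is a minimal sketch (not the authors' implementation) of the DP-SGD gradient step that this line of work builds on. The function name `dp_sgd_step`, the constants, and the toy data are illustrative assumptions.

```python
# A minimal sketch of the DP-SGD noise mechanism referenced in the TL;DR:
# per-example gradients are clipped to norm C, Gaussian noise with standard
# deviation sigma * C is added, and privacy accounting is driven by the
# sampling rate q = batch_size / dataset_size together with sigma.
# All names and constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """One noisy gradient estimate as used in DP-SGD.

    per_example_grads: array of shape (batch_size, dim).
    Returns the averaged, clipped, noised gradient.
    """
    batch_size, _ = per_example_grads.shape
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Add Gaussian noise calibrated to the clipping norm and noise multiplier.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / batch_size

# Illustration: the noise added to the *averaged* gradient shrinks with the
# batch size, while the sampling rate q fed to the privacy accountant grows.
dataset_size = 50_000
for batch_size in (64, 512, 4096):
    q = batch_size / dataset_size  # sampling rate used by the accountant
    grads = rng.normal(size=(batch_size, 10))
    g = dp_sgd_step(grads)
    print(f"q={q:.4f}, batch={batch_size}, noisy grad norm={np.linalg.norm(g):.3f}")
```

In practice a moments/Rényi-DP accountant converts the sampling rate q, the noise multiplier, and the number of steps into an (ε, δ) guarantee, which is where the interaction between large sampling rates and the noise scale highlighted in the title enters.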
Abstract
Learning often involves sensitive data and as such, privacy preserving extensions to stochastic gradient descent (SGD) and other machine learning algorithms have been developed using the definitions of differential privacy …