BriefGPT.xyz
Jul 2024
Fine-Tuning Large Language Models with User-Level Differential Privacy
Zachary Charles, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Nicole Mitchell...
TL;DR
We study practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP), provably protecting all the examples contributed by each user. Experiments under a fixed compute budget show that user-level sampling with user-level gradient clipping (ULS) generally yields better results when stronger privacy guarantees are required or the compute budget is large.
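The user-level clipping idea behind ULS can be illustrated with a minimal sketch (this is not the paper's implementation, and the function and parameter names here are illustrative): each user's example gradients are aggregated, the per-user aggregate is clipped to a fixed norm so the sensitivity of any single user's data is bounded, and Gaussian noise calibrated to that per-user sensitivity is added.

```python
import numpy as np

def user_level_dp_update(per_user_example_grads, clip_norm, noise_multiplier, rng):
    """One noisy update with user-level clipping (illustrative sketch).

    per_user_example_grads: list of arrays, each of shape (n_i, d) holding the
    gradients of one user's examples. Averaging within a user and clipping the
    result bounds the contribution of the user's *entire* dataset, which is
    what distinguishes user-level DP from example-level DP.
    """
    clipped = []
    for grads in per_user_example_grads:
        g = grads.mean(axis=0)                             # aggregate within the user
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))  # clip per user
    total = np.sum(clipped, axis=0)
    # Gaussian noise scaled to the per-user sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_user_example_grads)
```

With `noise_multiplier=0` the update reduces to an average of per-user clipped gradients, which makes the bounded-sensitivity property easy to check directly.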
Abstract
We investigate practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP) in order to provably safeguard all the examples contributed by each user. We st…