We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve state-of-the-art privacy-versus-utility tradeoffs on many standard NLP tasks.
In our study, we show that when Differential Privacy (DP) techniques are applied to Large Language Models (LLMs), the flatness of the loss landscape of DP-trained models plays a key role in the trade-off between privacy and generalization. We further propose a comprehensive framework that enforces appropriate weight flatness, substantially improving model generalization while maintaining competitive privacy guarantees.