BriefGPT.xyz
May, 2022
Differentially Private Decoding in Large Language Models
Jimit Majmudar, Christophe Dupuy, Charith Peris, Sami Smaili, Rahul Gupta...
TL;DR
This paper proposes a simple, computationally lightweight perturbation mechanism that provides a privacy guarantee for the model. It can be applied to any LLM without compromising model utility, addressing the trade-off LLMs face between privacy protection and retraining.
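The page does not spell out the mechanism itself, but one natural reading of a "simple, computationally lightweight perturbation" applied at decoding time is to mix the model's next-token distribution with the uniform distribution before sampling. The sketch below is an illustration under that assumption only; the function name, the mixing weight `lam`, and the use of PyTorch are all hypothetical and not taken from the abstract shown here.

```python
import torch

def perturbed_decode_step(logits: torch.Tensor, lam: float = 0.9) -> torch.Tensor:
    """Illustrative decode-time perturbation (assumed form, not the paper's verbatim method).

    Mixes the model's next-token distribution with the uniform distribution
    and samples the next token from the mixture.
    """
    probs = torch.softmax(logits, dim=-1)            # model distribution over the vocabulary
    vocab_size = probs.shape[-1]
    uniform = torch.full_like(probs, 1.0 / vocab_size)
    mixed = lam * probs + (1.0 - lam) * uniform      # perturbed distribution
    return torch.multinomial(mixed, num_samples=1)   # sampled next-token id
```

With `lam = 1` this reduces to ordinary sampling; smaller values of `lam` add more uniform noise, trading utility for a stronger perturbation, which mirrors the privacy-utility tension the TL;DR describes.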
Abstract
Recent large-scale natural language processing (NLP) systems use a pre-trained large language model (LLM) on massive and diverse corpora as a headstart. In practice, the pre-trained model is adapted to a wide array of tasks via …