May, 2023
Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken, Steffen Eger, Ivan Habernal
TL;DR
This study examines how to jointly address privacy protection and the removal of social bias when training text generation models. Experiments show that preserving privacy also amplifies bias in classification tasks; to improve model utility while balancing both goals, debiasing the model at the cost of some privacy protection yields the best trade-off.
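The TL;DR refers to privacy-preserving training of text models. A common instantiation of this is DP-SGD; the snippet below is a minimal, hedged sketch of differentially private fine-tuning using PyTorch and Opacus. The choice of DP-SGD/Opacus, the toy classifier, the random data, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: differentially private training with DP-SGD via Opacus.
# All model sizes, data, and hyperparameters below are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy classifier standing in for a language-model head (hypothetical dimensions).
model = torch.nn.Sequential(
    torch.nn.Linear(768, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = torch.nn.CrossEntropyLoss()

# Random placeholder data; in the paper's setting this would be text features and labels.
X = torch.randn(512, 768)
y = torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64)

# Wrap model, optimizer, and loader so per-sample gradients are clipped and noised (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # more noise -> stronger privacy, usually lower utility
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

model.train()
for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far (epsilon at a fixed delta).
print("epsilon =", privacy_engine.get_epsilon(delta=1e-5))
```

In a fairness-privacy study such as this one, a debiasing step (for example, augmenting or rebalancing the training data) would be combined with the private training loop above, and the resulting utility, bias, and privacy budget compared across configurations.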
Abstract
Protecting privacy in contemporary NLP models is gaining in importance. So does the need to mitigate social biases of such models. But can we have both at the same time? Existing research suggests that …