BriefGPT.xyz
Oct, 2020
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl, Malvina Nissim, Albert Gatt
TL;DR
The study compares the association between occupation names and gender-denoting target words in English and German, using BERT to detect gender bias. The results show that the approach works well for English, but not for languages such as German that have rich morphology and grammatical gender marking. The paper highlights the importance of bias probing and mitigation techniques, especially in large-scale, multilingual language models.
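The kind of occupation-pronoun association described above can be illustrated with a toy bias score. The sketch below is an assumption for illustration only: the function `bias_score` and the probability values are hypothetical placeholders, not output from BERT and not the paper's actual metric (the authors' association measure may be defined differently).

```python
# Minimal sketch: quantify gender bias as a log-ratio of the probabilities a
# masked language model assigns to gendered pronouns in an occupation template
# such as "[MASK] is a nurse." All numbers here are illustrative placeholders,
# not real model output.

import math

def bias_score(p_male: float, p_female: float) -> float:
    """Positive -> male-skewed, negative -> female-skewed, zero -> balanced."""
    return math.log(p_male / p_female)

# Hypothetical fill-in probabilities for "he" vs. "she" in the template above.
p_he, p_she = 0.12, 0.61
print(f"bias score: {bias_score(p_he, p_she):.3f}")  # negative: skews female
```

In practice such probabilities would come from a masked-token prediction over a template, and mitigation methods are then evaluated by how far they move scores like this toward zero.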
Abstract
Contextualized word embeddings have been replacing standard embeddings as the representational knowledge source of choice in NLP systems. Since a variety of biases have previously been found in standard word embeddings…