TL;DR: This work examines the challenges of evaluating and mitigating gender bias in language models in multilingual settings. By extending DisCo to different Indian languages, it creates a benchmark for evaluating gender bias in pre-trained masked language models, and it further assesses the effectiveness of various debiasing methods on SOTA massively multilingual models.
Abstract
While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior research work has primarily been limited to English. In this work, we investigate the challenges of evaluating and mitigating gender biases in multilingual settings: we create a benchmark for evaluating gender bias in pre-trained masked language models by extending DisCo to different Indian languages, and we evaluate the effectiveness of various debiasing methods for SOTA massively multilingual models.