BriefGPT.xyz
Sep, 2022
Efficient Gender Debiasing of Pre-trained Indic Language Models
Neeraja Kirtane, V Manushree, Aditya Kane
TL;DR
This paper quantifies gender bias in occupation terms for Indic languages and mitigates it with an efficient fine-tuning method, in order to build fairer systems.
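The TL;DR refers to quantifying occupational gender bias in a language model. One common intrinsic measure (a minimal sketch, not necessarily the exact metric used in this paper) compares how close an occupation word's embedding is to male versus female gendered words. The embedding values below are toy numbers chosen purely for illustration:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy word embeddings (hypothetical values, for illustration only).
emb = {
    "he":     [0.9, 0.1, 0.0],
    "she":    [0.1, 0.9, 0.0],
    "doctor": [0.7, 0.3, 0.2],
    "nurse":  [0.2, 0.8, 0.1],
}

def occupation_bias(word):
    # Positive score: occupation sits closer to "he";
    # negative score: closer to "she". Zero would be unbiased.
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

print(f"doctor bias: {occupation_bias('doctor'):+.3f}")
print(f"nurse bias:  {occupation_bias('nurse'):+.3f}")
```

A debiasing fine-tune would aim to push such scores toward zero for all occupation terms while preserving the model's general performance.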
Abstract
The gender bias present in the data on which language models are pre-trained gets reflected in the systems that use these models. The model's intrinsic …