TL;DR: We propose a fine-tuning method that can be applied at the token or sentence level to debias pretrained contextualised embeddings. Using gender bias as an example, we conduct a systematic study on multiple benchmark datasets with several SoTA contextualised representations, and find that applying token-level debiasing to all tokens and all layers of a contextualised embedding model yields the best performance.
Abstract
In comparison to the numerous debiasing methods proposed for static
non-contextualised word embeddings, the discriminative biases in contextualised
embeddings have received relatively little attention. We propose a