BriefGPT.xyz
Mar, 2022
Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View
Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun
TL;DR
This paper examines the bias, inconsistency, and unreliability that can arise in prompt-based probing methods, argues for removing these biases through causal intervention, and calls for better dataset design, improved probing frameworks, and more reliable evaluation criteria for pretrained language models.
Abstract
Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs). Unfortunately, recent studies have discovered such an …