May, 2023
Having Beer after Prayer? Measuring Cultural Bias in Large Language Models
Tarek Naous, Michael J. Ryan, Wei Xu
TL;DR
This paper investigates whether language models exhibit cultural bias. The study finds that current language models show a clear Western cultural bias when processing and generating Arabic text, across eight aspects: person names, food, clothing, locations, literature, beverages, religion, and sports. It also shows that providing the model with cultural indicators or culturally relevant demonstrations can help mitigate this bias.
Abstract
Are language models culturally biased? It is important that language models conform to the cultural aspects of the communities they serve. However, we show in this paper that …