Jul, 2024
An Actionable Framework for Assessing Bias and Fairness in Large Language Model Use Cases
Dylan Bouchard
TL;DR
This work provides practitioners with technical guidance for assessing bias and fairness risks in large language model (LLM) use cases. It categorizes LLM bias and fairness risks, formally defines a variety of evaluation metrics, and offers a decision framework for determining which metrics to apply to a given LLM use case.
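As a purely illustrative sketch of the kind of group-comparison metric such a framework formalizes (not the paper's own definition), the snippet below computes a parity ratio between mean scores that some external classifier (e.g., sentiment or toxicity) assigns to an LLM's responses for two groups of prompts differing only in a protected-attribute reference. The function name `score_parity_ratio`, the example scores, and the prompt pairing described in the comments are assumptions made for illustration.

```python
from statistics import mean

def score_parity_ratio(scores_group_a: list[float], scores_group_b: list[float]) -> float:
    """Return min(mean_a, mean_b) / max(mean_a, mean_b); 1.0 indicates parity."""
    mean_a, mean_b = mean(scores_group_a), mean(scores_group_b)
    lo, hi = sorted((mean_a, mean_b))
    return lo / hi if hi > 0 else 1.0

# Hypothetical sentiment scores for LLM responses to counterfactual prompt pairs
# ("...a male nurse..." vs. "...a female nurse...") -- fabricated example numbers.
scores_male = [0.82, 0.75, 0.90, 0.68]
scores_female = [0.61, 0.70, 0.58, 0.66]

ratio = score_parity_ratio(scores_male, scores_female)
print(f"parity ratio: {ratio:.2f}")  # values well below 1.0 flag a potential disparity
```

A use-case-driven framework like the one described would guide whether a counterfactual metric of this sort, or instead a classification- or recommendation-oriented fairness metric, is the appropriate choice.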
Abstract
Large language models (LLMs) can exhibit bias in a variety of ways. Such biases can create or exacerbate unfair outcomes for certain groups within a protected attribute, including, but not limited to sex, race, ...