Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney
TL;DR: This paper proposes a framework and several example methods for optimizing AI models that trade off objectives such as fairness and model accuracy according to the preferences of human policymakers, thereby reducing bias and unfairness.
Abstract
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of →