TL;DR: By using Rawlsian justice as the basis for choosing a fairness measure and striking the balance, we can make a principled choice in the fairness/accuracy trade-off for AI systems, focusing on the most vulnerable group and the fairness measure that most affects that group.
Abstract
In order to monitor and prevent bias in AI systems, we can use a wide range of
(statistical) fairness measures. However, it is mathematically impossible to
optimize for all of these measures at the same time. In a