Machine learning systems are often deployed to make critical decisions, such as credit lending and hiring. While making these decisions, such systems often encode the user's demographic information (e.g., gender, age) in their intermediate representations. This can lead to decisions that are unfair to certain demographic groups.
This paper proposes DualFair, a self-supervised model that removes bias with respect to sensitive attributes such as gender and race from learned representations. The model jointly optimizes two fairness criteria, group fairness and counterfactual fairness, yielding fairer predictions at both the group and individual levels. Detailed analyses on multiple datasets demonstrate the model's effectiveness and further reveal a synergistic effect from addressing both fairness criteria simultaneously, suggesting that the model holds potential value for fair intelligent web applications.
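As a concrete illustration (not part of the paper's method), the two criteria can be sketched as simple checks: group fairness compares positive-prediction rates across demographic groups, while counterfactual fairness compares a model's outputs before and after flipping a sensitive attribute. The function names and the toy binary-attribute setup below are assumptions for exposition only.

```python
import numpy as np

def demographic_parity_gap(preds, group):
    """Group fairness (demographic parity): absolute difference in
    positive-prediction rates between two demographic groups."""
    preds, group = np.asarray(preds), np.asarray(group)
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return abs(rate_a - rate_b)

def counterfactual_gap(model, x, sensitive_idx):
    """Counterfactual fairness probe: mean change in the model's output
    when a binary sensitive attribute is flipped for every individual."""
    x_cf = x.copy()
    x_cf[:, sensitive_idx] = 1 - x_cf[:, sensitive_idx]  # flip the attribute
    return np.abs(model(x) - model(x_cf)).mean()

# Toy example: six predictions, two groups of three.
preds = [1, 0, 1, 1, 0, 0]
group = [0, 0, 0, 1, 1, 1]
print(demographic_parity_gap(preds, group))  # |2/3 - 1/3| = 0.333...
```

A model that ignores the sensitive column entirely would score a `counterfactual_gap` of zero; DualFair's goal, as described above, is to drive both kinds of gap down at once rather than trading one off against the other.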