Learning unbiased models on imbalanced datasets is a significant challenge.
Rare classes tend to receive a concentrated representation in the classification
space, which hampers the generalization of learned boundaries to new test
examples.
In this paper, we demonstrate that Bayesian uncertainty
estimates can address the over-confident predictions that existing models make on imbalanced classification data. We propose the Balanced True Class Probability (BTCP) framework, which learns an uncertainty estimator through a Distributional Focal Loss (DFL) objective. Experimental results on multiple datasets show that BTCP performs strongly, particularly in correcting misclassifications.
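The exact form of the DFL objective is not given here. For reference, the standard focal loss it builds on down-weights easy, confident examples so that hard (often rare-class) examples dominate the gradient; a minimal sketch, assuming the usual formulation from Lin et al. (2017):

```python
import math

def focal_loss(p_true: float, gamma: float = 2.0) -> float:
    """Standard focal loss on the true-class probability p_true.

    The (1 - p_true)**gamma factor shrinks the loss of confident,
    correct predictions; gamma = 0 recovers plain cross-entropy.
    """
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# A confident correct prediction is penalized far less than an
# uncertain one, which is the mechanism that shifts training
# emphasis toward hard, rare-class examples.
easy = focal_loss(0.9)   # small loss
hard = focal_loss(0.1)   # large loss
```

With `gamma = 0` the modulating factor disappears and the function reduces to the ordinary negative log-likelihood, which is why the focal loss is commonly described as a reweighted cross-entropy.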