Medical image segmentation, which is essential for many clinical
applications, has achieved near human-level performance via data-driven deep
learning techniques. Nevertheless, this performance is predicated on the costly
process of manually annotating a large number of medical images.
We propose a novel Dual-debiased Heterogeneous Co-training (DHC) framework for semi-supervised 3D medical image segmentation. It dynamically leverages pseudo labels through two loss-weighting strategies, DistDW and DiffDW, to guide the model in mitigating data and learning biases, and further improves performance by co-training two accurate sub-models. Experimental results show that our method overcomes the class-imbalance problem, outperforms state-of-the-art semi-supervised methods, and shows promise in more challenging semi-supervised settings.
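To give intuition for distribution-aware loss weighting of the kind DistDW performs, here is a minimal sketch that derives per-class weights from pseudo-label class frequencies so that rare classes receive larger weights. The function name, the inverse-frequency form, and the normalization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distribution_aware_weights(pseudo_labels, num_classes, eps=1e-8):
    """Illustrative class weights from pseudo-label frequencies.

    Rarer classes get larger weights (a hypothetical inverse-frequency
    form for intuition only, not the paper's exact DistDW rule).
    """
    counts = np.bincount(pseudo_labels.ravel(), minlength=num_classes).astype(float)
    freqs = counts / max(counts.sum(), eps)
    # Inverse-frequency weights, normalized so their mean is 1,
    # keeping the overall loss scale roughly unchanged.
    w = 1.0 / (freqs + eps)
    return w / w.mean()

# Toy 3D pseudo-label volume with a heavily imbalanced foreground class.
labels = np.zeros((4, 8, 8), dtype=int)
labels[0, :2, :2] = 1  # rare class 1 (4 of 256 voxels)
w = distribution_aware_weights(labels, num_classes=2)
print(w)  # class 1 receives a much larger weight than class 0
```

Such weights would then scale the per-class terms of a segmentation loss (e.g. weighted cross-entropy), and could be recomputed each iteration as the pseudo-label distribution evolves.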