Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner. Although prevailing CL models have achieved great progress, in this paper we uncover an ever-o
Mutual Contrastive Learning (MCL) is a powerful method for improving feature representations in visual recognition tasks. It mutually interacts with and transfers contrastive distributions among a cohort of networks, using Interactive Contrastive Learning (ICL) to aggregate cross-network embedding information.
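The abstract above only states that ICL aggregates embedding information across networks; one plausible reading is a cross-network InfoNCE objective, where anchors from one network are contrasted against embeddings from a peer network. The sketch below illustrates that reading; the function name, the exact loss form, and the temperature value are all assumptions, not the paper's specification.

```python
import numpy as np

def interactive_contrastive_loss(z_a, z_b, temperature=0.1):
    """Cross-network InfoNCE sketch (assumed form): anchors from network A
    are pulled toward the same sample's embedding from network B (diagonal
    positives) and pushed away from other samples' embeddings (negatives)."""
    # L2-normalize so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # NLL of the positive pairs

# Toy usage with embeddings standing in for two hypothetical peer networks:
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))  # peer that agrees with z1
loss_aligned = interactive_contrastive_loss(z1, z2)
loss_random = interactive_contrastive_loss(z1, rng.normal(size=(8, 16)))
```

As expected for a contrastive objective, the loss is lower when the two networks produce aligned embeddings for the same samples than when their embeddings are unrelated.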
Self-supervised learning captures the information shared across multiple samples by using data augmentation strategies to produce similar representations, thereby learning models with more comprehensive features. We propose a module called CompMod with Meta Comprehensive Regularization, which updates the model through a bi-level optimization mechanism so that it can capture more comprehensive features. In addition, we provide theoretical support for the proposed method from the perspectives of information theory and causal counterfactuals. Experiments demonstrate that our method achieves significant improvements on classification, object detection, and instance segmentation tasks across multiple benchmark datasets.
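The abstract names a bi-level optimization mechanism without detailing it. As a generic illustration (not the paper's actual procedure), the toy below runs an inner SGD step on a regularized training loss and an outer meta-step that tunes a hypothetical regularization strength to reduce a validation loss, differentiating through the single unrolled inner step. All losses and parameter names here are invented for the sketch.

```python
import numpy as np  # kept for consistency with the surrounding examples

# Toy bi-level optimization: inner level fits a scalar weight w on a
# regularized training loss; outer level tunes the regularization
# strength lam to minimize a separate validation loss.
def inner_grad(w, lam):
    # d/dw [ (w - 2)^2 + lam * w^2 ]  (toy train loss + regularizer)
    return 2.0 * (w - 2.0) + 2.0 * lam * w

val_loss = lambda w: (w - 1.0) ** 2  # validation target differs from train

lr, meta_lr = 0.1, 0.05
w, lam = 0.0, 3.0
initial = val_loss(w - lr * inner_grad(w, lam))
for _ in range(200):
    w_new = w - lr * inner_grad(w, lam)        # inner SGD step
    # hypergradient: d val_loss(w_new) / d lam through the unrolled step,
    # since d w_new / d lam = -lr * 2 * w
    dval_dlam = 2.0 * (w_new - 1.0) * (-lr * 2.0 * w)
    lam = max(0.0, lam - meta_lr * dval_dlam)  # outer (meta) update
    w = w_new
final = val_loss(w)
```

With a fixed `lam`, the inner loop alone converges to `w = 2 / (1 + lam)`; the outer loop steers `lam` so that this fixed point moves toward the validation optimum, which is the essential structure of any bi-level scheme of this kind.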