Self-supervised learning has drawn attention for its effectiveness in learning in-domain representations without ground-truth annotations; in particular, it has been shown that properly designed pretext tasks (e.g., contrastive learning) can yield representations that transfer well to downstream tasks.
This paper proposes ShotCoL, a self-supervised method that uses contrastive learning to learn shot representations for detecting scene boundaries and advertisement insertion timestamps.
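To make the shot-level contrastive objective concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss over shot embeddings. The `info_nce_loss` function, the linear stand-in encoder, the feature dimensions, and the choice of a neighboring shot as the positive pair are illustrative assumptions for this sketch, not ShotCoL's exact architecture or sampling scheme.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, key_emb, temperature=0.07):
    """InfoNCE-style contrastive loss: each query's positive key is the
    entry at the same batch index; all other keys act as negatives."""
    q = F.normalize(query_emb, dim=1)   # (B, D) query shot embeddings
    k = F.normalize(key_emb, dim=1)     # (B, D) positive embeddings (e.g., a neighboring shot)
    logits = q @ k.t() / temperature    # (B, B) pairwise cosine similarities
    targets = torch.arange(q.size(0), device=q.device)  # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Illustrative usage with a hypothetical shot encoder and random features.
encoder = torch.nn.Linear(2048, 128)    # stand-in for the shot encoder
shots = torch.randn(32, 2048)           # a batch of query shots
neighbors = torch.randn(32, 2048)       # a neighboring shot chosen as the positive for each query
loss = info_nce_loss(encoder(shots), encoder(neighbors))
loss.backward()
```

The learned embeddings can then be compared across consecutive shots; a sharp drop in similarity is a natural signal for a scene boundary or a candidate ad insertion point.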