BriefGPT.xyz
Mar, 2021
Cycle Self-Training for Domain Adaptation
Hong Liu, Jianmin Wang, Mingsheng Long
TL;DR
Proposes Cycle Self-Training, an algorithm that introduces Tsallis entropy as a confidence-friendly regularizer so that pseudo-labels in unsupervised domain adaptation generalize better to the true target labels, enabling transfer learning from unlabeled target data.
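The TL;DR names Tsallis entropy as the confidence-friendly regularizer applied to pseudo-label predictions. A minimal sketch of that quantity is below; the function name, the choice of α = 2, and the example probability vectors are illustrative, not taken from the paper.

```python
import numpy as np

def tsallis_entropy(p, alpha=2.0):
    """Tsallis entropy S_alpha(p) = (1 - sum_i p_i**alpha) / (alpha - 1).

    As alpha -> 1 this recovers Shannon entropy. Minimizing it on target
    predictions encourages confident (peaked) pseudo-labels; the alpha
    parameter controls how harshly flat predictions are penalized.
    """
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

# A peaked prediction has lower Tsallis entropy than a flat one.
confident = tsallis_entropy([0.97, 0.01, 0.01, 0.01])  # approx 0.0588
uniform = tsallis_entropy([0.25, 0.25, 0.25, 0.25])    # 0.75
print(confident < uniform)  # True
```

In a training loop, the mean Tsallis entropy over target-domain softmax outputs would be added to the loss as a regularization term.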
Abstract
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to bridge the domain gap. More recently, self-training has been gaining momentum in UDA. Originated from semi-supervised learning, …