June 2022
Patch-level Representation Learning for Self-supervised Vision Transformers
Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin
TL;DR
This paper designs a simple yet effective visual pretraining task, called SelfPatch, that exploits the architectural characteristics of Vision Transformers (ViTs) to improve performance on various types of visual tasks without human annotation, achieved by training neural networks through unsupervised learning on diverse images.
Abstract
Recent self-supervised learning (SSL) methods have shown impressive results in learning visual representations from unlabeled images. This paper aims to improve their performance further by utilizing the architec…
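To make the idea above more concrete, below is a minimal, hypothetical sketch of a patch-level self-supervised objective over ViT patch tokens, in which each patch representation is pulled toward an aggregate of its most similar patches from a second (e.g., momentum/teacher) branch. The function name `patch_level_ssl_loss`, the mean-pooling aggregation, and the BYOL-style loss are illustrative assumptions, not the authors' implementation; SelfPatch's exact positive selection and aggregation may differ.

```python
import torch
import torch.nn.functional as F

def patch_level_ssl_loss(student_patches: torch.Tensor,
                         teacher_patches: torch.Tensor,
                         k: int = 4) -> torch.Tensor:
    """student_patches, teacher_patches: (B, N, D) ViT patch-token embeddings
    from two branches (e.g., a student and an EMA teacher) on the same image.
    Hypothetical sketch of a patch-level SSL objective, not the paper's code."""
    s = F.normalize(student_patches, dim=-1)
    t = F.normalize(teacher_patches, dim=-1).detach()

    # For each patch, pick its k most similar other patches as pseudo-positives.
    sim = torch.einsum("bnd,bmd->bnm", t, t)            # (B, N, N) patch-to-patch similarity
    sim.diagonal(dim1=1, dim2=2).fill_(float("-inf"))   # exclude the patch itself
    topk = sim.topk(k, dim=-1).indices                  # (B, N, k)

    # Aggregate the selected positives into one target per patch
    # (simple mean pooling here for illustration).
    idx = topk.unsqueeze(-1).expand(-1, -1, -1, t.size(-1))        # (B, N, k, D)
    neighbors = torch.gather(
        t.unsqueeze(1).expand(-1, t.size(1), -1, -1), 2, idx)      # (B, N, k, D)
    targets = F.normalize(neighbors.mean(dim=2), dim=-1)           # (B, N, D)

    # BYOL-style loss: pull each student patch toward its aggregated target.
    return (2 - 2 * (s * targets).sum(dim=-1)).mean()
```

In practice such a patch-level term would typically be combined with an image-level SSL objective; the sketch only illustrates the core idea of treating a patch's most similar neighboring tokens as positive targets.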