October 2022
i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?
Kevin Zhang, Zhiqiang Shen
TL;DR
This paper introduces i-MAE (Interpretable MAE), a simple yet effective framework for interpreting Masked Autoencoders. Extensive experiments on CIFAR-10/100, Tiny-ImageNet, and ImageNet-1K show that it is a well-suited design for explaining the behavior of the MAE framework and that it yields stronger representations.
Abstract
Masked image modeling (MIM) has been recognized as a strong and popular self-supervised pre-training approach in the vision domain. However, the interpretability of the mechanism and properties of the learned representations …
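The question in the title, whether MAE latent representations are linearly separable, is conventionally tested with a linear probe: freeze the pre-trained encoder and train only a linear classifier on its features. The sketch below is a minimal, generic illustration of that protocol, not the authors' i-MAE code; `encoder`, `train_loader`, `test_loader`, and the feature/class dimensions are assumed placeholders.

```python
# Minimal linear-probe sketch (assumption: generic protocol, not the paper's code).
# High probe accuracy on frozen features suggests the latent space is
# (close to) linearly separable with respect to the class labels.
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, test_loader, feat_dim, num_classes,
                 epochs=10, lr=1e-3, device="cpu"):
    """Freeze `encoder`, fit one linear layer on its features,
    and return test accuracy as a proxy for linear separability."""
    encoder = encoder.to(device).eval()
    for p in encoder.parameters():
        p.requires_grad_(False)  # encoder stays frozen throughout

    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = encoder(x)  # latent representation, shape (B, feat_dim)
            loss = loss_fn(probe(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            preds = probe(encoder(x)).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total
```

Because only the linear layer is trained, any accuracy gain over chance must come from structure already present in the frozen features, which is what makes this probe a standard separability test for self-supervised encoders.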