BriefGPT.xyz
April 2021
Where and What? Examining Interpretable Disentangled Representations
Xinqi Zhu, Chang Xu, Dacheng Tao
TL;DR
This paper proposes an interpretability-driven unsupervised method for disentangled representation learning: by learning spatial masks, introducing latent perturbations, and performing unsupervised model selection, it learns high-quality disentangled representations.
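A minimal sketch of how a learned spatial mask could gate per-dimension contributions to a decoder feature map, so that each latent dimension only affects a restricted spatial region ("where"). The module, shapes, and parameter names (e.g. `SpatialMask`, `feat_hw`) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class SpatialMask(nn.Module):
    """Hypothetical sketch: one learnable spatial mask per latent dimension,
    restricting where that dimension can modify the decoder feature map."""

    def __init__(self, num_latents: int, feat_hw: int = 16):
        super().__init__()
        # One learnable mask logit map per latent dimension.
        self.mask_logits = nn.Parameter(torch.zeros(num_latents, feat_hw, feat_hw))

    def forward(self, per_dim_feats: torch.Tensor) -> torch.Tensor:
        # per_dim_feats: (batch, num_latents, channels, H, W), the contribution
        # of each latent dimension to the decoder feature map.
        masks = torch.sigmoid(self.mask_logits)           # (num_latents, H, W), values in [0, 1]
        masked = per_dim_feats * masks[None, :, None]     # broadcast over batch and channels
        # Sum the spatially constrained contributions into one feature map.
        return masked.sum(dim=1)                          # (batch, channels, H, W)
```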
Abstract
Capturing interpretable variations has long been one of the goals in disentanglement learning. However, unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement …