January 2019
MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders
Xuezhe Ma, Chunting Zhou, Eduard Hovy
TL;DR
This paper introduces a new regularization method, mutual posterior-divergence regularization, which controls the geometric structure of the latent space to enable meaningful representation learning, and achieves strong performance on three image benchmark datasets.
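As a rough illustration of the idea named in the summary (not the authors' actual objective, whose exact form is not given here), one can measure how far apart the approximate posteriors of different data points sit in latent space by averaging a symmetric KL divergence over pairs of diagonal-Gaussian posteriors; a regularizer built on such a quantity can then push the posteriors toward a desired geometry. The function names below are illustrative assumptions:

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    # KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), summed over latent dims
    return 0.5 * np.sum(
        np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0, axis=-1
    )

def mutual_posterior_divergence(mu, var):
    """Average symmetric KL over all distinct pairs of posteriors in a batch.

    mu, var: arrays of shape (batch, latent_dim) holding each data point's
    posterior mean and (diagonal) variance. Returns a scalar that is 0 when
    all posteriors coincide and grows as they spread apart.
    """
    n = mu.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += kl_diag_gauss(mu[i], var[i], mu[j], var[j])
            total += kl_diag_gauss(mu[j], var[j], mu[i], var[i])
            pairs += 1
    return total / pairs
```

In a VAE training loop, a term like this (suitably weighted) could be added to the ELBO to discourage the posteriors from either collapsing onto one another or drifting arbitrarily far apart; the paper's precise formulation should be taken from the full text.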
Abstract
The variational autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. However, recen…