TL;DR: This work proposes a new image-generation framework that splits the task into two GAN components, structure generation and style generation. With additional loss functions and joint learning, the model produces more realistic images and learns unsupervised RGBD representations.
Abstract
Current generative frameworks use end-to-end learning and generate images by
sampling from a uniform noise distribution. However, these approaches ignore the
most basic principle of image formation: images are the product of (a) Structure:
the underlying 3D model; and (b) Style: the texture map
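The structure/style factorization described above can be sketched as a two-stage pipeline: one generator maps noise to a surface-normal map (structure), and a second generator renders an RGB image conditioned on that map plus style noise. The sketch below is purely illustrative, using random linear maps as stand-ins for the learned networks; all names, shapes, and the shading rule are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the two-stage factorization: a structure
# generator produces a surface-normal map, and a style generator
# renders the image conditioned on it. Shapes and ops are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def structure_generator(z_struct, h=8, w=8):
    """Map structure noise to a coarse surface-normal map of shape (h, w, 3)."""
    W = rng.standard_normal((z_struct.size, h * w * 3)) * 0.1
    normals = np.tanh(z_struct @ W).reshape(h, w, 3)
    # Normalize each pixel's normal vector to unit length.
    return normals / (np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8)

def style_generator(normals, z_style):
    """Render an RGB image conditioned on the normal map plus style noise."""
    h, w, _ = normals.shape
    W = rng.standard_normal((z_style.size, h * w * 3)) * 0.1
    texture = np.tanh(z_style @ W).reshape(h, w, 3)
    # Toy "rendering": blend texture with structure (stand-in for the
    # learned conditional generator).
    return 0.5 * (texture + normals)

z_struct = rng.standard_normal(16)
z_style = rng.standard_normal(16)
normals = structure_generator(z_struct)
image = style_generator(normals, z_style)
print(image.shape)
```

In a real system both generators would be deep networks trained adversarially, with the structure output supervised against normals estimated from RGBD data and the two stages fine-tuned jointly end to end.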