We address the task of 3D semantic scene completion: given a single depth image, we predict the semantic labels and occupancy of the voxels in a 3D grid representing the scene. In light of the recently introduced generative adversarial networks (GANs), our goal is to explore the potential of adversarial learning for this task.
This paper proposes 3D-RecGAN, a novel method that reconstructs the complete 3D structure of a given object from a single, arbitrary depth view using generative adversarial networks. By combining an autoencoder with the generative power of a conditional GAN framework, it recovers accurate, fine-grained 3D structure of the object in a high-dimensional voxel space. Extensive experiments show that the method significantly outperforms the state of the art in single-view 3D object reconstruction and can reconstruct unseen object categories.
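The pipeline described above — an encoder-decoder generator that completes a partial voxel grid, trained with a reconstruction loss plus an adversarial term from a discriminator — can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: the real model uses 3D convolutional layers on large occupancy grids, while here randomly initialized dense weights, a tiny 8×8×8 grid, and the loss weighting are all stand-in assumptions chosen only to show how the pieces fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: a real model would apply 3D convolutions to e.g. 64^3 grids.
VOX = 8 * 8 * 8     # flattened occupancy grid
LATENT = 32         # latent code size (assumed for illustration)

# Random weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (LATENT, VOX))
W_dec = rng.normal(0, 0.1, (VOX, LATENT))
W_dis = rng.normal(0, 0.1, (1, VOX))

def generator(partial):
    """Autoencoder: encode a partial occupancy grid, decode a completed one."""
    z = np.tanh(W_enc @ partial)       # latent code
    return sigmoid(W_dec @ z)          # per-voxel occupancy probability

def discriminator(full):
    """Conditional-GAN critic: score how 'real' a completed grid looks."""
    return sigmoid(W_dis @ full)[0]

# Synthetic data standing in for a single-view input and its ground truth.
partial = (rng.random(VOX) > 0.7).astype(float)
target  = (rng.random(VOX) > 0.5).astype(float)

completed = generator(partial)
# Generator objective: per-voxel reconstruction term + adversarial term.
recon_loss = -np.mean(target * np.log(completed + 1e-8)
                      + (1 - target) * np.log(1 - completed + 1e-8))
adv_loss = -np.log(discriminator(completed) + 1e-8)
total_loss = recon_loss + 0.01 * adv_loss  # weighting is an assumption
print(completed.shape, total_loss > 0)
```

During training the discriminator would be updated to separate ground-truth grids from completed ones, while the generator minimizes the combined loss above; the reconstruction term anchors coarse shape and the adversarial term encourages fine-grained detail.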