Advances in deep generative networks have led to impressive results in recent
years. Nevertheless, such models can often waste their capacity on the minutiae
of datasets, presumably due to weak inductive biases in their decoders. This is
where graphics engines may come in handy, since they encode explicit, structured knowledge about the 3D world.
In this paper, we leverage differentiable renderers to extract and disentangle the 3D knowledge learned by generative models. We exploit a GAN as a multi-view data generator to train an inverse graphics network with an off-the-shelf differentiable renderer, and then use the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties. We evaluate our approach against state-of-the-art inverse graphics networks on existing datasets, both quantitatively and through user studies, and show that the disentangled GAN acts as a controllable 3D "neural renderer" that complements traditional graphics renderers.
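The disentanglement idea above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's method: the "GAN" is a fixed linear map, the "inverse-graphics teacher" recovers a few latent dimensions that play the role of interpretable 3D attributes, and a linear map fitted by least squares takes the disentangled code (3D attributes plus a residual appearance code) back to the GAN latent, so the attributes can be edited directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained GAN: a fixed linear map from latent z to "image".
W_gan = rng.normal(size=(64, 8))

def gan(z):
    return W_gan @ z

def teacher(img):
    # Hypothetical inverse-graphics teacher: recovers the latent via a
    # pseudo-inverse and treats its first 3 dims as "3D attributes"
    # (camera/shape parameters in this toy setup).
    return (np.linalg.pinv(W_gan) @ img)[:3]

# Disentanglement step: fit a map A from the interpretable code
# [3D attributes; residual appearance dims] back to the GAN latent.
Z = rng.normal(size=(8, 1000))                              # sampled latents
attrs = np.stack([teacher(gan(z)) for z in Z.T], axis=1)    # teacher labels
codes = np.concatenate([attrs, Z[3:]], axis=0)              # disentangled code
A, *_ = np.linalg.lstsq(codes.T, Z.T, rcond=None)           # code -> latent
A = A.T

# Controllable "neural rendering": edit only a 3D attribute, then resynthesize.
z0 = rng.normal(size=8)
c = np.concatenate([teacher(gan(z0)), z0[3:]])
c_edit = c.copy()
c_edit[0] += 1.0                                            # e.g. move the "camera"
img_edit = gan(A @ c_edit)
```

In the actual approach the generator, teacher, and mapping network are deep models trained with rendering losses; the linear algebra here only mirrors the data flow: GAN samples supervise the teacher, and the teacher's labels supervise the disentangled-code-to-latent mapping.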