Image-to-image translation aims to preserve source contents while translating
to discriminative target styles between two visual domains. Most works apply
adversarial learning in the ambient image space, which could be computationally
expensive and challenging to train. In this paper,
we propose latent space anchoring, a novel image generation approach that scales to new visual domains without fine-tuning the encoders and decoders of existing domains. It anchors images of different domains onto the latent space of the same frozen GAN by learning lightweight encoder and regressor models, and the learned encoders and regressors of different domains can be arbitrarily combined to perform image-to-image translation. Experiments show that the method achieves superior performance on both standard and scalable UNIT tasks.
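The anchor-then-recombine idea can be sketched with simple linear stand-ins. Everything below is an illustrative assumption, not the paper's actual networks: a frozen generator `G` is shared across domains, each domain trains only a lightweight encoder (image to latent) and regressor (generator output to domain image), and translation composes the source encoder with the target regressor.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM = 8, 16

# Frozen, pretrained generator shared by all domains
# (a fixed random linear map stands in for the GAN here).
G = rng.standard_normal((IMG_DIM, LATENT_DIM))

def make_domain(seed):
    """Hypothetical per-domain encoder/regressor pair (linear stand-ins)."""
    r = np.random.default_rng(seed)
    encoder = r.standard_normal((LATENT_DIM, IMG_DIM))  # image -> latent
    regressor = r.standard_normal((IMG_DIM, IMG_DIM))   # G output -> domain image
    return encoder, regressor

# Adding domain B touches only its own encoder/regressor; G stays frozen.
enc_A, reg_A = make_domain(1)
enc_B, reg_B = make_domain(2)

def translate(x, src_encoder, tgt_regressor):
    """Anchor x in the frozen latent space, then decode in the target domain."""
    z = src_encoder @ x         # lightweight encoder anchors x to G's latent space
    shared = G @ z              # frozen generator renders the shared representation
    return tgt_regressor @ shared  # target regressor maps it into the new domain

x = rng.standard_normal(IMG_DIM)
y_ab = translate(x, enc_A, reg_B)  # A -> B, G untouched
y_ba = translate(x, enc_B, reg_A)  # encoders/regressors combine arbitrarily
```

The point of the sketch is the decoupling: any source encoder pairs with any target regressor, which is why new domains extend the system without retraining existing components.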