Recently, synthesizing personalized characters from a single user-given
portrait has received remarkable attention with the rapid popularization of
social media and the metaverse. The input image is not always captured in a
frontal view, so it is important to acquire or predict a canonical view for
subsequent reconstruction.
In digital content creation, generating a high-quality 3D character from a single image remains challenging. This paper introduces the CharacterGen framework, which efficiently generates 3D characters via an image-conditioned multi-view diffusion model and a transformer-based sparse-view reconstruction model. Quantitative and qualitative experiments demonstrate that the method produces 3D characters with high-quality geometry and texture, which are suitable for downstream rigging and animation applications.
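The two-stage pipeline described above can be sketched at a high level. This is a minimal, hypothetical sketch: the function names, signatures, and placeholder outputs are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of a two-stage single-image-to-3D-character pipeline:
# (1) an image-conditioned multi-view diffusion model produces consistent
#     canonical-pose views; (2) a transformer-based sparse-view reconstruction
#     model lifts those views to a textured 3D character.
# All names and return values below are placeholders, not real model code.

def multiview_diffusion(portrait_path, num_views=4):
    """Stage 1 (hypothetical): generate canonical-pose views conditioned
    on a single input portrait."""
    # Placeholder: a real model would denoise per-view latents jointly
    # so that the generated views stay mutually consistent.
    return [f"view_{i}" for i in range(num_views)]

def sparse_view_reconstruction(views):
    """Stage 2 (hypothetical): map the sparse set of generated views to a
    textured 3D character mesh ready for rigging and animation."""
    # Placeholder: a real model would regress geometry and a texture map.
    return {"mesh": "triangle_mesh", "texture": "uv_map",
            "n_views": len(views)}

views = multiview_diffusion("portrait.png", num_views=4)
character = sparse_view_reconstruction(views)
print(character["n_views"])  # 4
```

The key design point reflected in the sketch is the decoupling: the diffusion stage handles view canonicalization and consistency, so the reconstruction stage only needs to handle a small, well-posed set of input views.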