Yichao Yan, Jingwei Xu, Bingbing Ni, Xiaokang Yang
TL;DR: Starting from a single image, this work combines human skeleton information, pose motion, an appearance reference, and a triplet loss in a conditional GAN framework to generate more realistic articulated human motion sequences. Experiments use the KTH and Human3.6M datasets.
Abstract
This work makes the first attempt to generate articulated human motion sequences from a single image. On the one hand, we utilize paired inputs, including human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames, base
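The TL;DR mentions a triplet loss as one of the training signals. The paper's exact formulation is not given in this excerpt, so the following is only a minimal sketch of the standard triplet loss (anchor/positive/negative embeddings with a margin), using NumPy and hypothetical toy inputs:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive
    embedding and push it away from the negative one by at least
    `margin` (in squared Euclidean distance)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings (illustrative only, not from the paper):
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # close to the anchor
n = np.array([3.0, 0.0])  # already far from the anchor
print(triplet_loss(a, p, n))  # → 0.0, margin already satisfied
```

In the motion-generation setting, such a loss would encourage frames of the same appearance/motion pair to embed closer together than mismatched pairs, though the paper's actual choice of anchors and negatives may differ.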