Learning from human feedback has shown success in aligning large, pretrained
models with human values. Prior works have mostly focused on learning from
high-level labels, such as preferences between pairs of model outputs. On the
other hand, many domains could benefit from more involved, detailed feedback.
This paper presents a fine-tuning method that uses human feedback to align text-to-image deep generative models. By analyzing design choices that balance the alignment-fidelity trade-off and optimizing a reward-weighted likelihood, the fine-tuned model generates objects that more accurately reflect specified attributes such as color, count, and background. The results show that human feedback can significantly improve the performance of text-to-image deep generative models.
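The core of reward-weighted likelihood optimization can be illustrated with a minimal sketch: each training pair's negative log-likelihood is scaled by a reward derived from human feedback, so poorly aligned samples contribute little to the gradient. The function name, the toy log-probabilities, and the binary reward values below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def reward_weighted_nll(log_probs, rewards):
    """Reward-weighted negative log-likelihood (illustrative sketch).

    log_probs: model log-likelihood of each (prompt, image) training pair
    rewards:   human-feedback-derived reward per pair (e.g. 1 = well aligned,
               0 = misaligned); pairs with reward 0 contribute nothing
    """
    assert len(log_probs) == len(rewards)
    weighted = [-r * lp for lp, r in zip(log_probs, rewards)]
    return sum(weighted) / len(weighted)

# Toy usage: the second pair (reward 0) is effectively dropped from the loss.
log_probs = [math.log(0.9), math.log(0.2), math.log(0.5)]
rewards = [1.0, 0.0, 1.0]
loss = reward_weighted_nll(log_probs, rewards)
```

In practice the reward would come from a learned reward model trained on human ratings, and the log-likelihood term from the generative model being fine-tuned.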