Current large-scale generative models are remarkably effective at generating high-quality images from text prompts. However, they lack the ability to precisely control the size and position of objects in the generated image. In this study, we analyze the generative mechanism of text-to-image models.
We propose two novel loss functions that refocus the attention maps during sampling according to a given layout, addressing the failure of existing text-to-image synthesis methods to faithfully follow text prompts involving multiple objects, attributes, and spatial compositions. Through comprehensive experiments on the DrawBench and HRS benchmarks, using layouts synthesized by Large Language Models, we demonstrate that our proposed method integrates easily and effectively into existing text-to-image methods and consistently improves the alignment between generated images and their text prompts.
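The abstract does not give the exact form of the two losses, but the core idea of refocusing an attention map onto a layout region can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual formulation: it assumes a single object token's attention map and a binary layout mask, and penalizes attention mass that falls outside the mask.

```python
import numpy as np

def refocus_loss(attn, mask):
    """Hypothetical sketch of a layout-refocusing loss.

    attn: (H, W) non-negative attention map for one object token.
    mask: (H, W) binary layout mask (1 inside the object's region).
    Returns a scalar that is 0 when all attention mass lies inside
    the mask and grows as mass leaks outside it.
    """
    attn = attn / (attn.sum() + 1e-8)   # normalize to a distribution
    inside = (attn * mask).sum()        # attention mass inside the region
    return (1.0 - inside) ** 2          # penalize mass outside the region
```

In a diffusion-style sampler, the gradient of such a loss with respect to the latent could be used to nudge each denoising step so that the object's attention, and hence the object itself, stays within its assigned layout region.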