May, 2024
StyleMaster: Towards Flexible Stylized Image Generation with Diffusion Models
TL;DR: This stylized text-to-image generation paper proposes StyleMaster, a framework built on pretrained Stable Diffusion that generates stylized images from text prompts, addressing two issues in prior work: insufficient style strength and inconsistent semantics. It introduces a multi-source style embedder and a dynamic attention adapter to supply richer style embeddings and greater adaptability, and it trains the model with objective functions based on the denoising loss. Experiments show superior performance in reproducing varied target styles while preserving the semantic content of the prompt.
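To make the two components more concrete, below is a minimal PyTorch sketch of how a multi-source style embedder and a gated ("dynamic") cross-attention adapter could be attached to a Stable Diffusion UNet block. All module names, dimensions, and the specific fusion and gating choices here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MultiSourceStyleEmbedder(nn.Module):
    """Hypothetical sketch: fuses style features from multiple sources
    (e.g. a CLIP image feature and a text-derived style feature) into a
    small set of style tokens. The real StyleMaster design may differ."""

    def __init__(self, image_dim=768, text_dim=768, style_dim=768, num_tokens=4):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, style_dim)
        self.text_proj = nn.Linear(text_dim, style_dim)
        self.style_tokens = nn.Parameter(torch.randn(num_tokens, style_dim))
        self.fuse = nn.TransformerEncoderLayer(
            d_model=style_dim, nhead=8, batch_first=True
        )

    def forward(self, image_feat, text_feat):
        # image_feat: (B, image_dim), text_feat: (B, text_dim)
        b = image_feat.size(0)
        tokens = self.style_tokens.unsqueeze(0).expand(b, -1, -1)
        src = torch.cat(
            [
                tokens,
                self.image_proj(image_feat).unsqueeze(1),
                self.text_proj(text_feat).unsqueeze(1),
            ],
            dim=1,
        )
        fused = self.fuse(src)
        # Return only the learned style tokens: (B, num_tokens, style_dim)
        return fused[:, : tokens.size(1)]


class DynamicAttentionAdapter(nn.Module):
    """Hypothetical sketch: a second cross-attention branch over the style
    embedding, blended into the UNet hidden states with a learned,
    input-dependent gate (one possible reading of "dynamic")."""

    def __init__(self, hidden_dim=768, style_dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            hidden_dim, num_heads, kdim=style_dim, vdim=style_dim, batch_first=True
        )
        self.gate = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, hidden_states, style_emb):
        # hidden_states: (B, L, hidden_dim) from a UNet block
        # style_emb:     (B, S, style_dim) from the style embedder
        style_out, _ = self.attn(hidden_states, style_emb, style_emb)
        gate = self.gate(hidden_states)  # per-token blending weight in [0, 1]
        return hidden_states + gate * style_out


if __name__ == "__main__":
    embedder = MultiSourceStyleEmbedder()
    adapter = DynamicAttentionAdapter()
    style = embedder(torch.randn(2, 768), torch.randn(2, 768))
    out = adapter(torch.randn(2, 64, 768), style)  # 64 spatial tokens, e.g. 8x8 latent
    print(out.shape)  # torch.Size([2, 64, 768])
```

In this reading, the adapter leaves the base model's text cross-attention untouched and only adds a gated residual branch, which is one common way to inject style while keeping the prompt semantics; whether StyleMaster gates per token, per layer, or otherwise is not specified in this summary.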