BriefGPT.xyz
Jan, 2024
Progressive Knowledge Distillation Of Stable Diffusion XL Using Layer Level Loss
Yatharth Gupta, Vishnu V. Jaddipal, Harish Prabhala, Sayak Paul, Patrick von Platen
TL;DR
By reducing model size through knowledge distillation, we introduce two compact Stable Diffusion XL models (SSD-1B and Segmind-Vega) and demonstrate that model size can be reduced effectively while preserving high-quality generation.
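The TL;DR describes distilling a smaller student from the SDXL teacher with a layer-level loss, i.e. matching intermediate feature maps in addition to the final output. A minimal sketch of that idea, using tiny stand-in networks rather than the actual SDXL U-Net (the network, widths, and weighting below are illustrative assumptions, not the paper's exact setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in for a diffusion U-Net that also exposes an intermediate feature."""
    def __init__(self, hidden: int):
        super().__init__()
        self.block1 = nn.Linear(8, hidden)
        self.block2 = nn.Linear(hidden, 8)

    def forward(self, x):
        h = torch.relu(self.block1(x))  # intermediate (layer-level) feature
        return self.block2(h), h

def layer_level_distill_loss(student, teacher, x, feat_weight=0.5):
    """Output MSE plus weighted MSE on matched intermediate features.

    feat_weight is a hypothetical hyperparameter balancing the two terms.
    """
    s_out, s_feat = student(x)
    with torch.no_grad():                      # teacher is frozen
        t_out, t_feat = teacher(x)
    out_loss = F.mse_loss(s_out, t_out)        # match final prediction
    feat_loss = F.mse_loss(s_feat, t_feat)     # match internal activations
    return out_loss + feat_weight * feat_loss

# Same hidden width so features align; the real work instead removes whole
# blocks from the student, matching losses at the surviving layers.
teacher = TinyNet(hidden=16)
student = TinyNet(hidden=16)
loss = layer_level_distill_loss(student, teacher, torch.randn(4, 8))
```

In a training loop this `loss` would be backpropagated through the student only; the `torch.no_grad()` context keeps the teacher fixed.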
Abstract
Stable Diffusion XL (SDXL) has become the best open-source text-to-image (T2I) model for its versatility and top-notch image quality. Efficiently addressing the …