On the Scalability of Diffusion-based Text-to-Image Generation
Authors: Hao Li, Yang Zou, Ying Wang, Orchid Majumder, Yusheng Xie, R. Manmatha, Ashwin Swaminathan, Zhuowen Tu, Stefano Ermon, Stefano Soatto
What
This paper investigates the scaling properties of diffusion-based text-to-image models, focusing on the denoising backbone and dataset size to understand how to design and train these models effectively.
Why
This work is important because it provides insights into the design and training of large-scale text-to-image models, which are computationally expensive to develop. The findings offer practical guidance for improving performance and efficiency in this domain.
How
The authors conducted controlled experiments, training UNet and Transformer architectures of varying sizes and configurations. They also curated large-scale datasets and analyzed the impact of dataset size, quality, and caption enhancement on model performance. Models were evaluated with TIFA, ImageReward, FID, CLIP score, and HPSv2.
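For context, here is a minimal sketch of how one of these metrics, CLIP score, can be computed: it is the cosine similarity between CLIP embeddings of an image and its caption. The paper's exact evaluation pipeline and CLIP checkpoint are not specified in this summary, so the Hugging Face model name below is an assumption.

```python
# Sketch of a CLIP-score computation using the open-source Hugging Face
# `transformers` CLIP model. The checkpoint choice is an assumption, not
# necessarily the one used in the paper.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP embeddings of an image and a caption."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # image_embeds and text_embeds are already L2-normalized by CLIPModel,
    # so their dot product equals the cosine similarity.
    return (out.image_embeds * out.text_embeds).sum(dim=-1).item()

# Example: clip_score(Image.open("sample.png"), "a corgi riding a skateboard")
```

TIFA, ImageReward, and HPSv2 follow a similar pattern of scoring generated image-text pairs with a pretrained judge model.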
Result
The paper demonstrates that SDXL's UNet design outperforms its counterparts, and that increasing its transformer depth is more parameter-efficient for improving text-image alignment than increasing channel numbers alone. The authors also identify an efficient UNet variant with 45% fewer parameters and 28% faster inference than SDXL's UNet, at comparable performance. The study further shows that dataset quality matters more than size, and that augmenting datasets with synthetic captions significantly improves training efficiency and performance.
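The parameter-efficiency argument follows from simple arithmetic: a transformer block's weight count grows quadratically with channel width but only linearly with depth. The back-of-envelope sketch below uses illustrative widths and depths, not SDXL's actual configuration.

```python
# Illustrative only: the widths and depths below are NOT SDXL's real config.
# A transformer block at hidden width C has roughly 4*C^2 weights in
# self-attention (Q, K, V, output projections), ~4*C^2 in cross-attention,
# and ~8*C^2 in a 4x-expansion feed-forward MLP: about 16*C^2 in total.
def block_params(width: int) -> int:
    return 16 * width ** 2

def stack_params(width: int, depth: int) -> int:
    return depth * block_params(width)

base   = stack_params(width=1280, depth=10)
deeper = stack_params(width=1280, depth=14)  # +4 blocks: linear growth
wider  = stack_params(width=1536, depth=10)  # +20% width: quadratic growth

for name, n in [("base", base), ("deeper", deeper), ("wider", wider)]:
    print(f"{name:>6}: {n / 1e6:6.0f}M params ({(n / base - 1) * 100:+.0f}%)")
```

Adding 40% more blocks costs 40% more parameters, while widening channels by only 20% already costs 44% more, so depth buys more capacity per parameter.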
Limitations & Future Work
The paper acknowledges that Transformers trained from scratch lack the inductive bias of UNets, and suggests exploring architectural improvements for Transformers in future work. It also notes that its scaling-law findings warrant further investigation with even larger models and datasets.
Abstract
Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling law for diffusion-based text-to-image (T2I) models is not fully explored. It is also unclear how to efficiently scale the model for better performance at reduced cost. The different training settings and expensive training costs make a fair model comparison extremely difficult. In this work, we empirically study the scaling properties of diffusion-based T2I models by performing extensive and rigorous ablations on scaling both denoising backbones and training sets, including training scaled UNet and Transformer variants ranging from 0.4B to 4B parameters on datasets of up to 600M images. For model scaling, we find that the location and amount of cross-attention distinguish the performance of existing UNet designs, and that increasing the number of transformer blocks is more parameter-efficient for improving text-image alignment than increasing channel numbers. We then identify an efficient UNet variant that is 45% smaller and 28% faster than SDXL's UNet. On the data scaling side, we show that the quality and diversity of the training set matter more than simply dataset size. Increasing caption density and diversity improves text-image alignment performance and learning efficiency. Finally, we provide scaling functions to predict text-image alignment performance as functions of model size, compute, and dataset size.
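The scaling functions themselves are not reproduced in this summary, but fitting such a curve is straightforward. Below is a generic sketch that fits a saturating power law to hypothetical (model size, alignment score) pairs; the data points and functional form are illustrative placeholders, not the paper's measurements.

```python
# Sketch of fitting a saturating power-law scaling function.
# The (size, score) pairs are hypothetical, NOT the paper's results.
import numpy as np
from scipy.optimize import curve_fit

sizes  = np.array([0.4, 0.9, 1.8, 2.6, 4.0])   # model size in billions of params
scores = np.array([0.74, 0.78, 0.81, 0.83, 0.84])  # alignment metric (e.g. TIFA)

def power_law(n, a, b, c):
    # score(N) = a - b * N^(-c): approaches ceiling `a` as N grows.
    return a - b * np.power(n, -c)

(a, b, c), _ = curve_fit(power_law, sizes, scores, p0=[0.9, 0.1, 0.5])
print(f"fit: score(N) = {a:.3f} - {b:.3f} * N^(-{c:.3f})")
print(f"extrapolated score at 8B params: {power_law(8.0, a, b, c):.3f}")
```

The same recipe applies with training compute or dataset size on the x-axis, which is how such functions let one predict alignment performance before committing to a full training run.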