Any-Size-Diffusion: Toward Efficient Text-Driven Synthesis for Any-Size HD Images

Authors: Qingping Zheng, Yuanfan Guo, Jiankang Deng, Jianhua Han, Ying Li, Songcen Xu, Hang Xu

What

This paper introduces Any-Size-Diffusion (ASD), a two-stage pipeline designed to generate well-composed images of arbitrary sizes from text prompts, addressing the resolution-induced composition problems in existing text-to-image synthesis models.

Why

This paper is important because it tackles a limitation of existing text-to-image models like Stable Diffusion, which often struggle to maintain good composition when generating images at resolutions other than the one they were trained on. The proposed ASD pipeline enables flexible image-size generation while preserving compositional quality.

How

The ASD pipeline works in two stages (sketched below): 1) Any Ratio Adaptability Diffusion (ARAD) is trained on images grouped into a restricted set of aspect-ratio buckets, so it can generate an image conditioned on both the text prompt and a target size while keeping the composition intact. 2) Fast Seamless Tiled Diffusion (FSTD) then enlarges the ARAD output to any desired resolution, using a novel implicit overlap during tiled sampling that avoids seams without the cost of explicitly overlapping tiles.
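To make the two stages concrete, here are two minimal sketches. For ARAD-style multi-aspect training, images are assigned to a restricted set of ratio buckets; the bucket list and helper below are illustrative assumptions, not the paper's exact configuration.

```python
import math

# Hypothetical ratio buckets: the paper trains on a restricted range of
# aspect ratios, but the exact values here are assumptions for illustration.
BUCKETS = [(512, 512), (448, 576), (576, 448), (384, 640), (640, 384)]

def nearest_bucket(width: int, height: int) -> tuple[int, int]:
    """Assign an image to the training bucket with the closest aspect ratio."""
    ratio = width / height
    # Compare ratios on a log scale so wide and tall mismatches are symmetric.
    return min(BUCKETS, key=lambda wh: abs(math.log(ratio * wh[1] / wh[0])))
```

FSTD's implicit overlap can be pictured as re-drawing the tile grid at a random offset on every sampling step, so tile borders never sit in the same place twice and seams average out over the sampling trajectory. The sketch below is a self-contained NumPy illustration under stated assumptions: `denoise_tile` is a hypothetical stand-in for one diffusion step on a single tile, the latent dimensions are divisible by the tile size, and the wrap-around via `np.roll` is a simplification of however the authors handle grid boundaries.

```python
import numpy as np

def fstd_sample(latent, denoise_tile, num_steps, tile=64, rng=None):
    """Sketch of implicit-overlap tiled sampling (FSTD-style).

    Each step shifts the tile grid by a random offset, then denoises
    plain non-overlapping tiles: no extra forward passes, but borders
    move every step, which suppresses seam artifacts.

    Assumes latent is (C, H, W) with H and W divisible by `tile`.
    """
    rng = rng or np.random.default_rng(0)
    _, H, W = latent.shape
    for step in range(num_steps):
        # Random per-step shift of the grid: the "implicit overlap".
        dy, dx = rng.integers(0, tile, size=2)
        shifted = np.roll(latent, shift=(-dy, -dx), axis=(1, 2))
        for y in range(0, H, tile):
            for x in range(0, W, tile):
                patch = shifted[:, y:y + tile, x:x + tile]
                shifted[:, y:y + tile, x:x + tile] = denoise_tile(patch, step)
        # Undo the shift before the next step.
        latent = np.roll(shifted, shift=(dy, dx), axis=(1, 2))
    return latent

# Toy usage with a stand-in "denoiser" that just damps the latent.
if __name__ == "__main__":
    x = np.random.default_rng(1).standard_normal((4, 128, 128))
    out = fstd_sample(x, lambda patch, step: 0.9 * patch, num_steps=10)
    print(out.shape)  # (4, 128, 128)
```

Because each step still makes exactly one non-overlapping pass over the latent, the per-step cost matches plain tiling, which is where the reported speedup over explicitly overlapped tiles comes from.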

Result

ASD demonstrates superior performance in generating well-composed images of arbitrary sizes, confirmed by both quantitative and qualitative evaluation. Experiments show ASD reduces FID by 33.49 compared to the baseline Stable Diffusion model and can generate images at up to 9x higher resolution on the same hardware. The implicit overlap in FSTD effectively removes the seaming artifacts common to tiled diffusion methods, achieving high-fidelity image magnification at a speed comparable to non-overlapping tiling.

Limitations & Future Work

The paper acknowledges a potential limitation: FSTD's computational cost grows with the number of tiles required at higher resolutions. Future work could explore optimization strategies to mitigate this cost. The authors also suggest extending ASD to other domains such as video generation and 3D object synthesis.

Abstract

Stable diffusion, a generative model used in text-to-image synthesis, frequently encounters resolution-induced composition problems when generating images of varying sizes. This issue primarily stems from the model being trained on pairs of single-scale images and their corresponding text descriptions. Moreover, direct training on images of unlimited sizes is unfeasible, as it would require an immense number of text-image pairs and entail substantial computational expenses. To overcome these challenges, we propose a two-stage pipeline named Any-Size-Diffusion (ASD), designed to efficiently generate well-composed images of any size, while minimizing the need for high-memory GPU resources. Specifically, the initial stage, dubbed Any Ratio Adaptability Diffusion (ARAD), leverages a selected set of images with a restricted range of ratios to optimize the text-conditional diffusion model, thereby improving its ability to adjust composition to accommodate diverse image sizes. To support the creation of images at any desired size, we further introduce a technique called Fast Seamless Tiled Diffusion (FSTD) at the subsequent stage. This method allows for the rapid enlargement of the ASD output to any high-resolution size, avoiding seaming artifacts or memory overloads. Experimental results on the LAION-COCO and MM-CelebA-HQ benchmarks demonstrate that ASD can produce well-structured images of arbitrary sizes, cutting down the inference time by 2x compared to the traditional tiled algorithm.