Lazy Diffusion Transformer for Interactive Image Editing

Authors: Yotam Nitzan, Zongze Wu, Richard Zhang, Eli Shechtman, Daniel Cohen-Or, Taesung Park, Michaël Gharbi

What

This paper introduces LazyDiffusion, a novel diffusion transformer model designed for efficient partial image generation, particularly targeting interactive image editing applications like inpainting.

Why

This work is important because it addresses the inefficiency of traditional inpainting methods, which regenerate the entire image even when only a small portion is edited. LazyDiffusion offers a significant speedup for localized edits while maintaining global consistency, making diffusion models more practical for interactive workflows.

How

The authors propose a two-stage approach: 1) A context encoder processes the entire image and mask to extract a compact global context specific to the masked region. 2) A diffusion-based transformer decoder iteratively generates only the masked pixels, conditioned on this context and the user’s text prompt. This approach ensures global coherence while significantly reducing computational cost by focusing solely on the area of interest.
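Below is a minimal, hypothetical sketch of this two-phase pipeline, not the authors' implementation: the module names, dimensions, token layout, and the plain-attention "denoising" loop are illustrative assumptions, intended only to show how a compact context computed once can condition a decoder whose cost depends on the number of masked tokens.

```python
# Illustrative sketch (not the paper's code) of a two-phase encoder/decoder pipeline.
# All names, sizes, and the simplified denoising loop are assumptions for exposition.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Compresses the full canvas + mask into a few global context tokens (assumed design)."""
    def __init__(self, dim=256, num_ctx_tokens=16, patch=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)  # RGB + mask channel
        self.ctx_queries = nn.Parameter(torch.randn(num_ctx_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, canvas_and_mask):
        tokens = self.patch_embed(canvas_and_mask).flatten(2).transpose(1, 2)  # (B, N, dim)
        q = self.ctx_queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        ctx, _ = self.attn(q, tokens, tokens)  # cross-attend image tokens into compact context
        return ctx

class MaskedRegionDenoiser(nn.Module):
    """Transformer that processes only the masked tokens, conditioned on the compact context."""
    def __init__(self, dim=256, depth=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, noisy_masked_tokens, ctx):
        # Concatenate context tokens; compute scales with the (typically small) masked region.
        x = torch.cat([ctx, noisy_masked_tokens], dim=1)
        x = self.blocks(x)
        return x[:, ctx.size(1):]  # keep predictions for the masked tokens only

# Toy usage: 64x64 canvas, mask covering roughly 10% of the patch tokens.
B, dim = 1, 256
canvas_and_mask = torch.randn(B, 4, 64, 64)
encoder, denoiser = ContextEncoder(dim=dim), MaskedRegionDenoiser(dim=dim)
ctx = encoder(canvas_and_mask)            # run once per edit
masked_tokens = torch.randn(B, 2, dim)    # only ~2 of the 16 patch tokens are masked
for _ in range(4):                        # a few illustrative "denoising" iterations
    masked_tokens = denoiser(masked_tokens, ctx)
```

The point of the sketch is the asymmetry: the encoder sees the whole canvas once, while the iterative decoder's per-step cost depends only on how many tokens fall inside the mask.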

Result

LazyDiffusion achieves a speedup of up to 10x over full-image inpainting methods for masks covering 10% of the image. It demonstrates quality competitive with state-of-the-art inpainting models, especially in scenarios requiring rich semantic context, indicating the effectiveness of its compressed context representation. User studies confirm a strong preference for LazyDiffusion over crop-based methods and comparable preference to full-image methods.

Limitations & Future Work

The authors acknowledge limitations regarding the context encoder’s quadratic scaling with input size, potentially limiting scalability to ultra-high-resolution images. They also identify occasional color inconsistencies between generated and visible regions. Future work could explore more efficient context encoding mechanisms and more principled solutions for seamless blending.

Abstract

We introduce a novel diffusion transformer, LazyDiffusion, that generates partial image updates efficiently. Our approach targets interactive image editing applications in which, starting from a blank canvas or an image, a user specifies a sequence of localized image modifications using binary masks and text prompts. Our generator operates in two phases. First, a context encoder processes the current canvas and user mask to produce a compact global context tailored to the region to generate. Second, conditioned on this context, a diffusion-based transformer decoder synthesizes the masked pixels in a “lazy” fashion, i.e., it only generates the masked region. This contrasts with previous works that either regenerate the full canvas, wasting time and computation, or confine processing to a tight rectangular crop around the mask, ignoring the global image context altogether. Our decoder’s runtime scales with the mask size, which is typically small, while our encoder introduces negligible overhead. We demonstrate that our approach is competitive with state-of-the-art inpainting methods in terms of quality and fidelity while providing a 10x speedup for typical user interactions, where the editing mask represents 10% of the image.