An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis
Authors: Aishwarya Agarwal, Srikrishna Karanam, Tripti Shukla, Balaji Vasan Srinivasan
What
This paper presents MATTE, a multi-attribute inversion algorithm for text-to-image diffusion models that extracts and disentangles color, object, layout, and style attributes from a single reference image for controlled image synthesis.
Why
This work is significant because it addresses a limitation of existing inversion methods, which struggle to disentangle multiple visual attributes from a reference image. By learning disentangled tokens for color, object, layout, and style, MATTE enables finer-grained control over image generation conditioned on a reference image.
How
The authors first conduct an extensive analysis of how attributes are distributed across U-Net layers and denoising timesteps of the diffusion process. Informed by this analysis, they propose MATTE, which learns a separate token for each attribute and restricts each token to influence specific layers and/or timesteps, thereby achieving disentanglement. They also introduce a loss function that encourages reconstruction fidelity while enforcing disentanglement among the color, object, layout, and style tokens. A minimal sketch of this mechanism is given below.
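The following is a hypothetical PyTorch sketch of the mechanism, not the authors' implementation: the `AttributeTokens` container, the `route` rule (which layers/timesteps each token influences), and the cosine-similarity regularizer are all illustrative assumptions standing in for the paper's actual layer/timestep assignment and losses.

```python
# Hypothetical sketch of layer/timestep-routed attribute tokens.
# Names and the routing rule are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

ATTRIBUTES = ("color", "object", "layout", "style")

class AttributeTokens(nn.Module):
    """One learnable text-conditioning embedding per attribute."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.tokens = nn.ParameterDict(
            {a: nn.Parameter(0.02 * torch.randn(embed_dim)) for a in ATTRIBUTES}
        )

    def route(self, layer_idx: int, t: int, n_layers: int, n_steps: int):
        """Pick which tokens condition a given (U-Net layer, timestep).

        Illustrative rule only: coarse layers and early (high-noise)
        steps carry layout/object; fine layers and late steps carry
        color/style. The paper derives the actual assignment from its
        layer/timestep analysis.
        """
        active = []
        if layer_idx < n_layers // 2 or t > n_steps // 2:
            active += ["layout", "object"]
        if layer_idx >= n_layers // 2 or t <= n_steps // 2:
            active += ["color", "style"]
        return torch.stack([self.tokens[a] for a in dict.fromkeys(active)])

def disentanglement_loss(tokens: AttributeTokens) -> torch.Tensor:
    """Penalize pairwise cosine similarity between attribute embeddings
    so each token is pushed to encode a distinct attribute."""
    embs = F.normalize(torch.stack(list(tokens.tokens.values())), dim=-1)
    sim = embs @ embs.T
    off_diag = sim - torch.diag(torch.diag(sim))
    n = len(ATTRIBUTES)
    return off_diag.abs().sum() / (n * (n - 1))
```

During inversion, a combined objective of the form `denoising_loss + lambda * disentanglement_loss(tokens)` would be minimized over the four tokens alone, with the diffusion model kept frozen.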
Result
MATTE demonstrates superior performance in extracting individual attributes, and combinations thereof, from a reference image and transferring them to new generations. Qualitative results show that it can control color, object, layout, and style independently, outperforming existing methods such as P+ and ProSpect. Quantitative evaluation with CLIP similarity scores further validates that MATTE learns disentangled and semantically meaningful attribute tokens.
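As a concrete example of this kind of quantitative check, CLIP image-text similarity can be computed with an off-the-shelf checkpoint. The snippet below is a sketch assuming the Hugging Face `transformers` CLIP implementation and the `openai/clip-vit-base-patch32` checkpoint; the paper's exact evaluation protocol (checkpoint, prompts, aggregation) may differ.

```python
# Sketch of a CLIP-similarity score; checkpoint choice and prompt
# phrasing are assumptions, not the paper's exact protocol.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, text: str) -> float:
    """Cosine similarity between CLIP embeddings of an image and a text."""
    inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())
```

Scoring a generated sample against an attribute phrase describing the reference (e.g., a color or style description) then gauges how well the corresponding learned token transferred that attribute.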
Limitations and Future Work
The paper acknowledges the computational cost of the inversion process as a limitation. It also notes that final generation quality is bounded by the base diffusion model's capabilities. Future work could optimize the efficiency of the inversion algorithm and explore alternative ways to improve attribute control during generation, such as fine-tuning model weights.
Abstract
We consider the problem of constraining diffusion model outputs with a user-supplied reference image. Our key objective is to extract multiple attributes (e.g., color, object, layout, style) from this single reference image and then generate new samples with them. One line of existing work proposes to invert the reference image into a single textual conditioning vector, enabling generation of new samples with this learned token. These methods, however, do not learn the multiple tokens necessary to condition model outputs on the multiple attributes noted above. Another line of techniques expands the inversion space to learn multiple embeddings, but only along the layer dimension (e.g., one per layer of the DDPM model) or the timestep dimension (one for a set of timesteps in the denoising process), leading to suboptimal attribute disentanglement. To address the aforementioned gaps, the first contribution of this paper is an extensive analysis to determine which attributes are captured in which dimension of the denoising process. As noted above, we consider both the timestep dimension (in reverse denoising) and the DDPM model layer dimension. We observe that a subset of these attributes is often captured in the same set of model layers and/or across the same denoising timesteps. For instance, color and style are captured across the same U-Net layers, whereas layout and color are captured across the same timestep stages. Consequently, an inversion process designed only for the timestep dimension or only for the layer dimension is insufficient to disentangle all attributes. This leads to our second contribution, where we design a new multi-attribute inversion algorithm, MATTE, with associated disentanglement-enhancing regularization losses, that operates across both dimensions and explicitly yields four disentangled tokens (color, style, layout, and object).
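The layer/timestep analysis described above can be reproduced in spirit with a simple grid probe: inject a learned token only into a chosen layer subset and timestep stage, then inspect which attributes of the reference survive in the output. The sketch below is hypothetical; `generate` stands in for an assumed wrapper around a frozen text-to-image pipeline that accepts such restrictions, and the layer/step splits are illustrative.

```python
# Hypothetical probe over (layer subset, timestep stage); `generate` is an
# assumed wrapper around a frozen diffusion pipeline, not the authors' code.
from itertools import product

COARSE, FINE = range(0, 8), range(8, 16)               # illustrative 16-layer split
EARLY, LATE = range(1000, 500, -1), range(500, 0, -1)  # high- vs low-noise steps

def probe(token, generate):
    """Return one image per (layer subset, timestep stage) cell; the
    attributes surviving in each cell reveal where they are captured."""
    cells = product(
        [("coarse", COARSE), ("fine", FINE)],
        [("early", EARLY), ("late", LATE)],
    )
    return {
        (lname, tname): generate(token, layers=set(layers), steps=set(steps))
        for (lname, layers), (tname, steps) in cells
    }
```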