Attention Calibration for Disentangled Text-to-Image Personalization

Authors: Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang

What

This paper introduces DisenDiff, a personalized text-to-image method that learns multiple distinct concepts from a single reference image and generates novel images placing those concepts in different contexts.

Why

The paper addresses a key limitation of existing personalized text-to-image models, which struggle to capture multiple distinct concepts from a single reference image. Overcoming this enables more flexible and creative image generation from a very limited amount of input data.

How

The authors propose an attention calibration mechanism for a text-to-image diffusion model. New learnable modifier tokens are bound to class words to capture the attributes of each concept. Constraints are then applied to the cross-attention maps: each class's attention is separated and strengthened so that its concept is complete and self-contained, while attention overlap between different classes is suppressed to reduce mutual influence. A minimal sketch of the modifier-binding idea follows below.
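
The following is a minimal, self-contained sketch (plain PyTorch) of how learnable modifier embeddings could be bound to class tokens in the prompt embedding sequence. It is not the authors' implementation; the class names, module name, token positions, and embedding size are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ModifierBoundEmbeddings(nn.Module):
        """Adds one learnable modifier embedding per concept and inserts it
        directly before that concept's class token in the text embeddings."""

        def __init__(self, embed_dim: int, num_concepts: int):
            super().__init__()
            # One trainable vector per concept (e.g. "<new1> cat", "<new2> dog").
            self.modifiers = nn.Parameter(torch.randn(num_concepts, embed_dim) * 0.02)

        def forward(self, prompt_embeds: torch.Tensor, class_positions: list) -> torch.Tensor:
            # prompt_embeds: (batch, seq_len, embed_dim) text-encoder output
            # class_positions: index of each class token (e.g. "cat", "dog") in the sequence
            out = []
            for b in range(prompt_embeds.shape[0]):
                seq = prompt_embeds[b]
                # Insert each modifier right before its bound class token,
                # processing rightmost positions first so earlier indices stay valid.
                for k, pos in sorted(enumerate(class_positions), key=lambda x: -x[1]):
                    seq = torch.cat([seq[:pos], self.modifiers[k:k + 1], seq[pos:]], dim=0)
                out.append(seq)
            return torch.stack(out, dim=0)

    # Hypothetical usage: bind two modifiers at assumed class-token positions 2 and 5.
    text_embeds = torch.randn(1, 77, 768)
    binder = ModifierBoundEmbeddings(embed_dim=768, num_concepts=2)
    calibrated_embeds = binder(text_embeds, class_positions=[2, 5])

In practice the modifier vectors (and, depending on the method, selected attention weights) would be optimized against the single reference image while the rest of the diffusion model stays frozen; the sketch only shows the binding step.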

Result

DisenDiff outperforms state-of-the-art methods in both qualitative and quantitative evaluations, demonstrating superior image fidelity and concept disentanglement. The authors also showcase its flexibility in applications like personalized concept inpainting and integration with LoRA for enhanced texture details.

Limitations & Future Work

The authors acknowledge limitations in disentangling fine-grained categories within the same class (e.g., dog breeds) and handling images with more than three concepts. Future work could explore algorithms tailored to these scenarios and address the limitations of existing text-to-image models when dealing with a higher number of concepts.

Abstract

Recent thrilling progress in large-scale text-to-image (T2I) models has unlocked unprecedented synthesis quality of AI-generated content (AIGC) including image generation, 3D and video composition. Further, personalized techniques enable appealing customized production of a novel concept given only several images as reference. However, an intriguing problem persists: Is it possible to capture multiple, novel concepts from one single reference image? In this paper, we identify that existing approaches fail to preserve visual consistency with the reference image and eliminate cross-influence from concepts. To alleviate this, we propose an attention calibration mechanism to improve the concept-level understanding of the T2I model. Specifically, we first introduce new learnable modifiers bound with classes to capture attributes of multiple concepts. Then, the classes are separated and strengthened following the activation of the cross-attention operation, ensuring comprehensive and self-contained concepts. Additionally, we suppress the attention activation of different classes to mitigate mutual influence among concepts. Together, our proposed method, dubbed DisenDiff, can learn disentangled multiple concepts from one single image and produce novel customized images with learned concepts. We demonstrate that our method outperforms the current state of the art in both qualitative and quantitative evaluations. More importantly, our proposed techniques are compatible with LoRA and inpainting pipelines, enabling more interactive experiences.
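
For concreteness, below is a hedged sketch (plain PyTorch, not the paper's released code or exact losses) of the two calibration constraints described in the abstract: keeping each class's cross-attention map strong and self-contained, and suppressing overlap between the maps of different classes. The map shapes, normalization, and penalty forms are illustrative assumptions.

    import torch

    def calibration_losses(attn_maps: torch.Tensor):
        # attn_maps: (num_concepts, H, W) cross-attention maps for the class tokens,
        # assumed averaged over heads/layers and normalized to [0, 1].
        num_concepts = attn_maps.shape[0]

        # (1) Strengthen: encourage each class map to have a confident peak region
        # by pushing its maximum activation toward 1.
        strengthen = (1.0 - attn_maps.flatten(1).max(dim=1).values).mean()

        # (2) Suppress: penalize spatial overlap between the attention maps of
        # different classes (mean pairwise product over spatial locations).
        suppress = attn_maps.new_zeros(())
        pairs = 0
        for i in range(num_concepts):
            for j in range(i + 1, num_concepts):
                suppress = suppress + (attn_maps[i] * attn_maps[j]).mean()
                pairs += 1
        if pairs > 0:
            suppress = suppress / pairs
        return strengthen, suppress

    # Hypothetical usage: add both terms to the diffusion reconstruction loss.
    maps = torch.rand(2, 16, 16)
    l_strengthen, l_suppress = calibration_losses(maps)

The relative weighting of these terms against the standard denoising objective is a tuning choice not specified here.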