Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement
Authors: Chenghao Li, Dake Chen, Yuke Zhang, Peter A. Beerel
What
This paper tackles the privacy issue of data replication in diffusion models by proposing a method to quantify caption generality and a novel dual fusion enhancement training approach.
Why
Diffusion models can replicate their training data, raising growing privacy concerns; mitigating this replication is crucial for the responsible development and deployment of such models.
How
The authors introduce a “generality score” that quantifies how general a caption is and use a large language model (LLM) to rewrite training captions into more general ones. They then propose a dual fusion enhancement approach that fuses a specific object's features with the original image in latent space and combines the corresponding label embedding with the caption embedding. They evaluate by fine-tuning Stable Diffusion v2.1 on a subset of LAION-2B and measuring a replication score and FID.
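The dual fusion step can be pictured as two weighted interpolations: one in the VAE latent space (object crop into the original image) and one in the text space (object label into the caption). Below is a minimal sketch of that idea, not the authors' implementation; the fusion weights `alpha` and `beta`, the function name `dual_fusion`, and the diffusers-style `vae`/`text_encoder`/`tokenizer` interfaces are all assumptions.

```python
# Minimal sketch of dual fusion enhancement, NOT the authors' exact code.
# Assumptions: fusion is a convex combination with hypothetical weights
# `alpha` (latent space) and `beta` (text space); `vae`, `text_encoder`,
# and `tokenizer` follow the Hugging Face diffusers Stable Diffusion API.
import torch

def dual_fusion(vae, text_encoder, tokenizer,
                image, object_crop, caption, object_label,
                alpha=0.5, beta=0.5, device="cuda"):
    """Fuse an object's features into the image latent, and the object's
    label embedding into the caption embedding."""
    with torch.no_grad():
        # Image branch: encode both images into VAE latents and interpolate.
        z_img = vae.encode(image.to(device)).latent_dist.sample()
        z_obj = vae.encode(object_crop.to(device)).latent_dist.sample()
        z_fused = (1 - alpha) * z_img + alpha * z_obj

        # Text branch: encode caption and label, then interpolate embeddings.
        def embed(text):
            tokens = tokenizer(text, padding="max_length",
                               max_length=tokenizer.model_max_length,
                               truncation=True, return_tensors="pt")
            return text_encoder(tokens.input_ids.to(device))[0]

        e_fused = (1 - beta) * embed(caption) + beta * embed(object_label)

    # The fused pair would replace (latent, conditioning) during fine-tuning.
    return z_fused, e_fused
```

In this reading, training on interpolated image-text pairs weakens the one-to-one tie between a caption and a single training image, which is the mechanism the paper links to replication.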
Result
The proposed method reduces replication by 43.5% relative to the baseline and outperforms other mitigation strategies while maintaining comparable generation quality and diversity. The paper also shows that generalized captions produced by LLMs alone effectively reduce replication.
Limitations & Future Work
The paper acknowledges a trade-off between reducing replication and maintaining image generation quality. Future work includes exploring the use of the generality score to guide caption generalization and iteratively enhance caption generality.
Abstract
While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to ‘replicate’ training data raises privacy concerns. Although recent research suggests that this replication may stem from the insufficient generalization of training data captions and duplication of training images, effective mitigation strategies remain elusive. To address this gap, our paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. We then leverage the generalized captions and propose a novel dual fusion enhancement approach to mitigate the replication of diffusion models. Our empirical results demonstrate that our proposed methods can significantly reduce replication by 43.5% compared to the original diffusion model while maintaining the diversity and quality of generations. Code is available at https://github.com/HowardLi0816/dual-fusion-diffusion.
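The abstract does not state how the generality score is computed. Purely as a hypothetical proxy (not the authors' definition), one could score a caption by the entropy of its CLIP image-matching distribution over a pool of training images: a caption that fits only its paired image scores low, while a broadly applicable caption scores high. The function name `generality_score` and the CLIP checkpoint below are illustrative choices.

```python
# Hypothetical generality score, NOT the paper's formula. A caption that
# matches many images about equally (high entropy over the image-matching
# distribution) is treated as more general.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def generality_score(caption, images):
    """Entropy of the caption-to-image matching distribution over a pool."""
    inputs = processor(text=[caption], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_text  # shape (1, num_images)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    return entropy.item()  # higher entropy -> caption fits many images
```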