VideoBooth: Diffusion-based Video Generation with Image Prompts

Authors: Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu

What

This paper introduces VideoBooth, a feed-forward framework that generates videos from both text prompts and image prompts, enabling customized content creation.

Why

This paper is important because it addresses limitations in text-driven video generation by incorporating image prompts for more precise control over subject appearance, which is crucial for customized content creation.

How

The authors propose a coarse-to-fine visual embedding strategy: 1) A CLIP image encoder extracts coarse visual embeddings from image prompts, capturing high-level semantic information. 2) Fine visual embeddings are extracted through an attention injection module, incorporating multi-scale image prompts into cross-frame attention layers for refining details and maintaining temporal consistency. The authors also created a dedicated VideoBooth dataset for training and evaluating their model.
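The coarse branch can be pictured as a frozen CLIP image encoder followed by a projection into the text-embedding space, so the image-prompt tokens can be appended to the text-prompt tokens consumed by cross-attention. Below is a minimal PyTorch-style sketch of that idea; the class name, the MLP projection, and the specific CLIP checkpoint are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import CLIPVisionModel


class CoarseVisualEmbedding(nn.Module):
    """Sketch of the coarse branch: encode the image prompt with a frozen
    CLIP image encoder and project it into the text-embedding space so it
    can be appended to the text-prompt tokens (names are illustrative)."""

    def __init__(self, clip_name="openai/clip-vit-large-patch14", text_dim=768):
        super().__init__()
        self.image_encoder = CLIPVisionModel.from_pretrained(clip_name)
        self.image_encoder.requires_grad_(False)  # keep CLIP frozen
        clip_dim = self.image_encoder.config.hidden_size
        # small MLP mapping CLIP features into the text-embedding space
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, image_prompt, text_embeds):
        # image_prompt: (B, 3, H, W); text_embeds: (B, L_text, text_dim)
        feats = self.image_encoder(image_prompt).last_hidden_state  # (B, L_img, clip_dim)
        img_embeds = self.proj(feats)                                # (B, L_img, text_dim)
        # concatenate so cross-attention attends to both text and image tokens
        return torch.cat([text_embeds, img_embeds], dim=1)
```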

Result

VideoBooth demonstrates state-of-the-art performance in generating high-quality, customized videos, effectively preserving visual attributes from image prompts while maintaining alignment with text prompts. Ablation studies confirm the effectiveness of the coarse-to-fine training strategy and both embedding modules.

Limitations / Future Work

The authors acknowledge the potential negative societal impact of generating fake videos and suggest exploring advanced fake-video detection methods as future work. Additionally, processing the full WebVid dataset and expanding the VideoBooth dataset are mentioned as future work.

Abstract

Text-driven video generation has witnessed rapid progress. However, text prompts alone are not enough to depict the desired subject appearance that accurately aligns with users' intent, especially for customized content creation. In this paper, we study the task of video generation with image prompts, which provide more accurate and direct content control beyond text prompts. Specifically, we propose a feed-forward framework, VideoBooth, with two dedicated designs: 1) We propose to embed image prompts in a coarse-to-fine manner. Coarse visual embeddings from the image encoder provide high-level encodings of image prompts, while fine visual embeddings from the proposed attention injection module provide multi-scale, detailed encodings of image prompts. These two complementary embeddings faithfully capture the desired appearance. 2) In the attention injection module at the fine level, multi-scale image prompts are fed into different cross-frame attention layers as additional keys and values. This extra spatial information refines the details in the first frame, which are then propagated to the remaining frames to maintain temporal consistency. Extensive experiments demonstrate that VideoBooth achieves state-of-the-art performance in generating customized, high-quality videos with subjects specified in image prompts. Notably, VideoBooth is a generalizable framework: a single model works for a wide range of image prompts in a feed-forward pass.
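The fine-level attention injection described in the abstract appends image-prompt features as extra keys and values in cross-frame attention. The simplified PyTorch sketch below illustrates this idea at a single scale; the class name, tensor shapes, and the choice to let every frame attend to the first frame plus the prompt tokens are assumptions for clarity, whereas the actual module operates at multiple U-Net scales with a refine-then-propagate schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossFrameAttentionWithImagePrompt(nn.Module):
    """Sketch of fine-level attention injection: image-prompt latents at the
    matching scale are appended as extra keys/values in cross-frame attention,
    so frames can also attend to the image prompt (illustrative only)."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, frame_tokens, prompt_tokens):
        # frame_tokens: (B, F, N, C) per-frame latents at one U-Net scale
        # prompt_tokens: (B, M, C) image-prompt latents at the same scale
        B, F_, N, C = frame_tokens.shape
        first_frame = frame_tokens[:, 0]                       # (B, N, C)
        q = self.to_q(frame_tokens.reshape(B, F_ * N, C))
        # keys/values: first-frame tokens plus the injected image-prompt tokens
        kv_src = torch.cat([first_frame, prompt_tokens], dim=1)
        k, v = self.to_k(kv_src), self.to_v(kv_src)

        def split(t):  # (B, L, C) -> (B, heads, L, C // heads)
            return t.reshape(B, -1, self.heads, C // self.heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(B, F_ * N, C)
        return self.to_out(out).reshape(B, F_, N, C)
```

Appending the prompt tokens only to the key/value set (never to the queries) keeps the output length equal to the number of frame tokens, so the module drops into an existing cross-frame attention layer without changing the rest of the U-Net.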