State of the Art on Diffusion Models for Visual Computing

Authors: Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T. Barron, Amit H. Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, C. Karen Liu, Lingjie Liu, Ben Mildenhall, Matthias Nießner, Björn Ommer, Christian Theobalt, Peter Wonka, Gordon Wetzstein

What

This state-of-the-art report provides a comprehensive overview of diffusion models for visual computing, focusing on their applications in generating and editing images, videos, 3D objects, and 4D scenes.

Why

Diffusion models have revolutionized visual computing, enabling unprecedented capabilities for content creation and editing. The report helps researchers, artists, and practitioners understand the fundamentals, recent advances, and open challenges of this rapidly evolving field.

How

The report presents the mathematical foundations of diffusion models, discusses practical implementations using the Stable Diffusion model, and explores conditioning, guidance, inversion, editing, and customization techniques. It then categorizes and summarizes recent advancements in diffusion models for video, 3D, and 4D content generation, highlighting key methodologies and applications.
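To make those foundations concrete, the standard DDPM-style formulation (common notation from the literature; the report's exact symbols may differ) defines a forward noising process and a noise-prediction training objective:

    q(x_t \mid x_0) = \mathcal{N}\!\left( x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1 - \bar\alpha_t) I \right),
    \qquad \bar\alpha_t = \prod_{s=1}^{t} (1 - \beta_s)

    \mathcal{L}_{\mathrm{DDPM}} = \mathbb{E}_{x_0,\ \epsilon \sim \mathcal{N}(0, I),\ t}
    \left[ \left\lVert \epsilon - \epsilon_\theta\!\left( \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon,\ t \right) \right\rVert^2 \right]

The guidance techniques surveyed in the report can likewise be illustrated with a minimal sampling loop. The sketch below shows DDPM ancestral sampling with classifier-free guidance; eps_model is a hypothetical noise-prediction network, and the linear beta schedule and latent shape are illustrative assumptions, not Stable Diffusion's actual configuration.

    # Minimal sketch: DDPM ancestral sampling with classifier-free guidance.
    # `eps_model(x, t, c)` is a hypothetical noise-prediction network; the
    # linear beta schedule and latent shape are illustrative assumptions.
    import torch

    @torch.no_grad()
    def sample_cfg(eps_model, cond, uncond, steps=50, scale=7.5,
                   shape=(1, 4, 64, 64)):
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        x = torch.randn(shape)  # start from pure Gaussian noise
        for t in reversed(range(steps)):
            # Classifier-free guidance: move the unconditional prediction
            # toward the conditional one by the guidance scale.
            eps_c = eps_model(x, t, cond)
            eps_u = eps_model(x, t, uncond)
            eps = eps_u + scale * (eps_c - eps_u)
            # DDPM posterior mean; add fresh noise on all but the last step.
            mean = x - (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t]) * eps
            x = mean / torch.sqrt(alphas[t])
            if t > 0:
                x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
        return x

In a latent diffusion model such as Stable Diffusion, the returned tensor would still need to be passed through the autoencoder's decoder to produce an image.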

Result

The report highlights the significant advancements in diffusion models, showcasing their ability to generate realistic and creative content across various modalities. Key findings include the effectiveness of latent diffusion models, score distillation sampling for 3D generation, and the emergence of 4D spatio-temporal diffusion for dynamic scenes.
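Of these, score distillation sampling (SDS) is compact enough to state here. Introduced in DreamFusion, it optimizes 3D scene parameters θ against a frozen 2D diffusion prior; in the common notation (g(θ) a differentiable renderer producing an image x, ε̂_φ the pretrained noise predictor conditioned on prompt y, and w(t) a timestep weighting), the gradient is:

    \nabla_\theta \mathcal{L}_{\mathrm{SDS}} =
    \mathbb{E}_{t,\, \epsilon} \left[ w(t)
    \left( \hat\epsilon_\phi(x_t;\ y,\ t) - \epsilon \right)
    \frac{\partial x}{\partial \theta} \right],
    \qquad x = g(\theta)

Intuitively, each step renders the 3D scene, perturbs the rendering with noise, and nudges the scene parameters so that the diffusion model finds the noised rendering more plausible under the text prompt.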

Limitations and Future Work

The report outlines open challenges, including the lack of reliable evaluation metrics, the scarcity of high-quality training data for video, 3D, and 4D content, the computational inefficiency of diffusion models, and the need for improved controllability and user interfaces. Future work may focus on addressing these challenges, exploring new applications, improving robustness and reproducibility, and attending to ethical considerations.

Abstract

The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains, diffusion models are the generative AI architecture of choice. Within the last year alone, the literature on diffusion-based tools and applications has seen exponential growth, and relevant papers are published across the computer graphics, computer vision, and AI communities, with new works appearing daily on arXiv. This rapid growth of the field makes it difficult to keep up with all recent developments. The goal of this state-of-the-art report (STAR) is to introduce the basic mathematical concepts of diffusion models, the implementation details and design choices of the popular Stable Diffusion model, and important aspects of these generative AI tools, including personalization, conditioning, and inversion, among others. Moreover, we give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing, categorized by the type of generated medium, including 2D images, videos, 3D objects, locomotion, and 4D scenes. Finally, we discuss available datasets, metrics, open challenges, and social implications. This STAR provides an intuitive starting point to explore this exciting topic for researchers, artists, and practitioners alike.