Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models

Authors: James Burgess, Kuan-Chieh Wang, Serena Yeung

What

This paper introduces Viewpoint Neural Textual Inversion (ViewNeTI), a method for controlling the viewpoint of objects in images generated by text-to-image diffusion models, enabling novel view synthesis from as little as a single input view.

Why

This paper is important because it demonstrates that 2D diffusion models, despite being trained on unposed images, encode 3D structural knowledge that can be leveraged for 3D vision tasks like novel view synthesis, even with very limited 3D supervision.

How

The authors train a small neural network, the view-mapper, to predict text encoder latents based on camera viewpoint parameters. These latents, along with object-specific latents, condition a frozen diffusion model (Stable Diffusion) to generate images from desired viewpoints. They explore single-scene training for viewpoint interpolation and multi-scene pretraining for generalization to novel scenes and single-view synthesis.
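The sketch below is a minimal, illustrative PyTorch version of such a view-mapper: a tiny MLP that maps camera viewpoint parameters into the text-encoder latent space of a frozen Stable Diffusion model. The names (ViewMapper, pose_dim, embed_dim) and the choice of a flattened 3x4 extrinsics vector and 768-dim CLIP token embedding are assumptions for illustration, not the authors' code, which conditions the mapper on additional inputs.

```python
import torch
import torch.nn as nn

class ViewMapper(nn.Module):
    """Tiny MLP mapping camera viewpoint parameters to a text-encoder latent.

    Illustrative sketch only: `pose_dim` assumes a flattened 3x4 camera
    extrinsics matrix, and `embed_dim` assumes the 768-dim token embedding
    space of Stable Diffusion's CLIP text encoder.
    """

    def __init__(self, pose_dim: int = 12, embed_dim: int = 768, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, camera_pose: torch.Tensor) -> torch.Tensor:
        # camera_pose: (batch, pose_dim) flattened viewpoint parameters
        # returns:     (batch, embed_dim) latent in the text-conditioning space
        return self.net(camera_pose)

# Example: predict a view-token embedding for one camera pose.
mapper = ViewMapper()
pose = torch.randn(1, 12)       # placeholder extrinsics (3x4 matrix, flattened)
view_embedding = mapper(pose)   # shape: (1, 768)
```

During training, only this small mapper (and the per-scene object latents) would receive gradients; the diffusion U-Net and text encoder stay frozen, which is what lets the pretrained 2D prior do the heavy lifting.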

Result

ViewNeTI achieves strong results for novel view synthesis, especially in the challenging single-view setting. It generates photorealistic images with plausible semantics, outperforming baselines in visual quality and on perceptual metrics such as LPIPS. The method also enables viewpoint control in text-to-image generation from user-defined prompts.
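For context, the snippet below is a minimal sketch of how the two metrics mentioned in this summary (PSNR and LPIPS) are typically computed between a predicted and a ground-truth view. It assumes the `lpips` pip package and images given as float tensors in [0, 1]; it is not the paper's evaluation code.

```python
import torch
import lpips  # pip install lpips

# Perceptual metric: LPIPS (lower is better); AlexNet backbone is a common default.
lpips_fn = lpips.LPIPS(net='alex')

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """PSNR in dB for images in [0, max_val]; higher is better."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Example with random stand-in images, shape (batch, 3, H, W) in [0, 1].
pred = torch.rand(1, 3, 256, 256)
target = torch.rand(1, 3, 256, 256)

print("PSNR (dB):", psnr(pred, target).item())
# LPIPS expects inputs scaled to [-1, 1].
print("LPIPS:", lpips_fn(pred * 2 - 1, target * 2 - 1).item())
```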

Limitations & Future Work

The paper acknowledges that imprecise object localization hurts PSNR scores and that the method struggles to reproduce fine object details. Future work could address these limitations, explore faster inference for object-token optimization, and apply the framework to other 3D tasks such as relighting and 2D-to-3D lifting.

Abstract

Text-to-image diffusion models understand spatial relationships between objects, but do they represent the true 3D structure of the world from only 2D supervision? We demonstrate that yes, 3D knowledge is encoded in 2D image diffusion models like Stable Diffusion, and we show that this structure can be exploited for 3D vision tasks. Our method, Viewpoint Neural Textual Inversion (ViewNeTI), controls the 3D viewpoint of objects in generated images from frozen diffusion models. We train a small neural mapper to take camera viewpoint parameters and predict text encoder latents; the latents then condition the diffusion generation process to produce images with the desired camera viewpoint. ViewNeTI naturally addresses Novel View Synthesis (NVS). By leveraging the frozen diffusion model as a prior, we can solve NVS with very few input views; we can even do single-view novel view synthesis. Our single-view NVS predictions have good semantic details and photorealism compared to prior methods. Our approach is well suited for modeling the uncertainty inherent in sparse 3D vision problems because it can efficiently generate diverse samples. Our view-control mechanism is general, and can even change the camera view in images generated by user-defined prompts.
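To make the conditioning mechanism concrete, the sketch below shows one way a predicted view latent could be spliced into the prompt embeddings of a frozen Stable Diffusion pipeline, using the Hugging Face `diffusers` library. The checkpoint id, the choice of token position, and the random stand-in for the view-mapper output are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a frozen Stable Diffusion pipeline (any SD 1.x checkpoint id works; CUDA assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a chair"

with torch.no_grad():
    # Encode the prompt with the frozen CLIP text encoder.
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        return_tensors="pt",
    ).input_ids.to("cuda")
    prompt_embeds = pipe.text_encoder(tokens)[0]  # (1, 77, 768)

    # Stand-in for the view-mapper output: a latent encoding the desired camera viewpoint.
    # In ViewNeTI this comes from the small trained mapper; here it is a random placeholder.
    view_latent = torch.randn(1, 768, device="cuda", dtype=prompt_embeds.dtype)

    # Overwrite one token slot (position 1, right after the start token) with the view latent.
    # The exact placement is an illustrative choice, not the paper's recipe.
    prompt_embeds[:, 1, :] = view_latent

# Generate with the modified embeddings; the U-Net and text encoder stay frozen throughout.
image = pipe(prompt_embeds=prompt_embeds).images[0]
image.save("view_conditioned_sample.png")
```

Because only the conditioning embeddings change, sampling repeatedly with different seeds yields diverse plausible views, which is how a frozen generative prior can express the uncertainty inherent in sparse-view 3D problems.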