Connecting NeRFs, Images, and Text
Authors: Francesco Ballerini, Pierluigi Zama Ramirez, Roberto Mirabella, Samuele Salti, Luigi Di Stefano
What
This paper introduces a novel framework connecting Neural Radiance Fields (NeRFs) with other modalities like text and images, enabling applications such as zero-shot NeRF classification and NeRF retrieval from images or text.
Why
This work is significant because it treats NeRFs as a data format in their own right and bridges the gap between NeRFs and established multimodal representation learning for images and text, opening new possibilities for 3D scene understanding and interaction.
How
The authors propose a framework that leverages pre-trained models: CLIP for image and text embeddings and NF2Vec for NeRF embeddings. They train two MLPs, one per direction, to map between these embedding spaces, thereby connecting NeRFs, images, and text.
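As a rough illustration, the mapping stage could look like the following PyTorch sketch. The embedding dimensions, MLP architecture, and MSE loss are assumptions for illustration (512 simply matches the CLIP ViT-B/32 width); `nerf2clip`, `clip2nerf`, and `make_mapper` are hypothetical names, not the paper's code.

```python
import torch
import torch.nn as nn

# Assumed embedding sizes; the true NF2Vec dimension depends on the
# pre-trained model, while 512 matches CLIP ViT-B/32.
NF2VEC_DIM = 1024
CLIP_DIM = 512

def make_mapper(in_dim: int, out_dim: int, hidden: int = 1024) -> nn.Module:
    """A simple MLP mapping one embedding space into another."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Two MLPs, one per direction of the bidirectional mapping.
nerf2clip = make_mapper(NF2VEC_DIM, CLIP_DIM)  # NF2Vec -> CLIP space
clip2nerf = make_mapper(CLIP_DIM, NF2VEC_DIM)  # CLIP -> NF2Vec space

opt = torch.optim.Adam(
    list(nerf2clip.parameters()) + list(clip2nerf.parameters()), lr=1e-4
)
mse = nn.MSELoss()

def train_step(nerf_emb: torch.Tensor, clip_emb: torch.Tensor) -> float:
    """One optimization step on a batch of paired (NF2Vec, CLIP)
    embeddings; the pre-trained encoders themselves are not updated."""
    loss = mse(nerf2clip(nerf_emb), clip_emb) + mse(clip2nerf(clip_emb), nerf_emb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Only the two small MLPs are optimized here, which is what makes the approach lightweight compared with training a joint multimodal encoder from scratch.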
Result
The framework achieves promising results on zero-shot NeRF classification, outperforming baselines that rely on rendered images. It also performs strongly on NeRF retrieval from both images and text, highlighting the effectiveness of the learned mappings. Notably, since the framework is trained solely on synthetic data, the authors propose an adaptation technique based on ControlNet to improve performance on real images.
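For intuition, zero-shot NeRF classification can then be framed as nearest-neighbor matching in CLIP space. The sketch below assumes the `nerf2clip` mapper from the previous snippet has been trained; the prompt template and the use of cosine similarity are illustrative assumptions, not details confirmed by this summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def zero_shot_classify(nerf_emb: torch.Tensor,
                       nerf2clip: nn.Module,
                       class_names: list) -> str:
    """Map an NF2Vec embedding into CLIP space via the trained
    `nerf2clip` MLP and return the class whose text embedding is
    closest by cosine similarity. Inputs are assumed on `device`."""
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    text_emb = F.normalize(clip_model.encode_text(prompts).float(), dim=-1)
    query = F.normalize(nerf2clip(nerf_emb).float(), dim=-1)
    idx = (query @ text_emb.T).argmax(dim=-1).item()
    return class_names[idx]
```

Retrieval follows the same pattern: encode the image or text query with CLIP, map it with `clip2nerf` (or map the NeRF gallery with `nerf2clip`), and rank the collection by embedding similarity.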
Limitations & Future Work
The paper acknowledges that the framework is currently limited to synthetic objects, since the NF2Vec encoder was trained on such data, and that its generation capabilities are bounded by the NF2Vec decoder. Future work aims to extend the framework to real-world scenes and objects, explore larger datasets, and investigate jointly training the encoders toward a shared latent space.
Abstract
Neural Radiance Fields (NeRFs) have emerged as a standard framework for representing 3D scenes and objects, introducing a novel data type for information exchange and storage. Concurrently, significant progress has been made in multimodal representation learning for text and image data. This paper explores a novel research direction that aims to connect the NeRF modality with other modalities, similar to established methodologies for images and text. To this end, we propose a simple framework that exploits pre-trained models for NeRF representations alongside multimodal models for text and image processing. Our framework learns a bidirectional mapping between NeRF embeddings and those obtained from corresponding images and text. This mapping unlocks several novel and useful applications, including NeRF zero-shot classification and NeRF retrieval from images or text.