Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent
- URL: http://arxiv.org/abs/2312.02568v1
- Date: Tue, 5 Dec 2023 08:32:46 GMT
- Title: Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent
- Authors: Jianmeng Liu, Yuyao Zhang, Zeyuan Meng, Yu-Wing Tai, Chi-Keung Tang
- Abstract summary: This paper explores promptable NeRF generation for direct conditioning and fast generation of NeRF parameters for the underlying 3D scenes.
Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single forward pass.
We show that our approach speeds up inference of the text-to-NeRF model DreamFusion and 3D reconstruction with the image-to-NeRF method Zero-1-to-3 by 3 to 5 times.
- Score: 61.56387277538849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores promptable NeRF generation (e.g., text prompt or single
image prompt) for direct conditioning and fast generation of NeRF parameters
for the underlying 3D scenes, thus bypassing complex intermediate steps while
providing full 3D generation with conditional control. Unlike previous
diffusion-CLIP-based pipelines that involve tedious per-prompt optimizations,
Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single
forward pass, leveraging a pre-trained implicit latent space of NeRF
parameters. Furthermore, in zero-shot tasks, our experiments demonstrate that
the NeRFs produced by our method serve as semantically informative
initializations, significantly accelerating the inference process of existing
prompt-to-NeRF methods. Specifically, we show that our approach speeds up
inference of the text-to-NeRF model DreamFusion and 3D reconstruction with the
image-to-NeRF method Zero-1-to-3 by 3 to 5 times.
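The single-forward-pass idea described above can be illustrated with a toy hypernetwork: a prompt embedding is projected into a latent space of NeRF parameters, then decoded into the weights of a small NeRF MLP. This is only a minimal sketch under stated assumptions, not the paper's architecture; all dimensions, names, and the random stand-in "pretrained" weights are illustrative.

```python
# Illustrative sketch: prompt-conditioned NeRF parameter generation in one
# forward pass. NOT the authors' implementation; sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

PROMPT_DIM = 512   # assumed CLIP-style prompt embedding size
LATENT_DIM = 64    # assumed implicit latent space of NeRF parameters
HIDDEN = 32        # width of the generated toy NeRF MLP

# Parameter count of a toy NeRF MLP mapping (x, y, z) -> (r, g, b, sigma)
N_PARAMS = (3 * HIDDEN + HIDDEN) + (HIDDEN * 4 + 4)

# Stand-ins for pretrained mappings: prompt -> latent -> flat parameters
W_enc = rng.normal(0, 0.02, (PROMPT_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.02, (LATENT_DIM, N_PARAMS))

def generate_nerf_params(prompt_embedding: np.ndarray) -> dict:
    """One forward pass: prompt embedding -> latent code -> NeRF MLP weights."""
    z = np.tanh(prompt_embedding @ W_enc)   # project prompt into latent space
    flat = z @ W_dec                        # decode latent to flat parameters
    # Unflatten into the toy MLP's weight matrices and bias vectors.
    i = 0
    W1 = flat[i:i + 3 * HIDDEN].reshape(3, HIDDEN); i += 3 * HIDDEN
    b1 = flat[i:i + HIDDEN]; i += HIDDEN
    W2 = flat[i:i + HIDDEN * 4].reshape(HIDDEN, 4); i += HIDDEN * 4
    b2 = flat[i:i + 4]
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}

def query_nerf(params: dict, xyz: np.ndarray) -> np.ndarray:
    """Evaluate the generated NeRF MLP at 3D points -> (r, g, b, sigma)."""
    h = np.maximum(xyz @ params["W1"] + params["b1"], 0.0)  # ReLU layer
    return h @ params["W2"] + params["b2"]

prompt = rng.normal(size=PROMPT_DIM)        # stand-in for a prompt embedding
params = generate_nerf_params(prompt)       # no per-prompt optimization loop
out = query_nerf(params, np.zeros((5, 3)))  # query 5 points at the origin
print(out.shape)  # (5, 4)
```

The generated weights would then serve either as the final NeRF or, as in the zero-shot experiments, as a semantically informative initialization that an existing prompt-to-NeRF optimizer refines.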
Related papers
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning [23.080474939586654]
We propose a novel camera-parameter-free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without providing prior information and constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z)
- A General Implicit Framework for Fast NeRF Composition and Rendering [40.07666955244417]
We propose a general implicit pipeline for composing NeRF objects quickly.
Our work introduces a new surface representation known as Neural Depth Fields (NeDF).
It leverages an intersection neural network to query NeRF for acceleration instead of depending on an explicit spatial structure.
arXiv Detail & Related papers (2023-08-09T02:27:23Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- 3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion [55.71215821923401]
We tackle the task of text-to-3D creation with pre-trained latent-based NeRFs (NeRFs that generate 3D objects given an input latent code).
We propose a novel method named 3D-CLFusion which leverages the pre-trained latent-based NeRFs and performs fast 3D content creation in less than a minute.
arXiv Detail & Related papers (2023-03-21T15:38:26Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
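The Mega-NeRF partitioning idea in the last entry can be sketched as nearest-centroid assignment: training views are grouped by the submodule centroid they fall closest to, and each group trains its NeRF in parallel. This sketch assumes assignment at the camera level for simplicity; the function name and toy coordinates are illustrative, and the paper itself partitions at the pixel/ray level with overlap.

```python
# Illustrative sketch of Mega-NeRF-style spatial partitioning: assign each
# camera to the nearest submodule centroid. Toy, camera-level approximation.
import numpy as np

def partition_cameras(positions: np.ndarray, centroids: np.ndarray) -> list:
    """Group camera indices by nearest centroid; one group per NeRF submodule."""
    # Pairwise distances, shape (n_cameras, n_centroids), via broadcasting.
    d = np.linalg.norm(positions[:, None, :] - centroids[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)               # closest centroid per camera
    return [np.where(nearest == k)[0] for k in range(len(centroids))]

# Four toy camera positions hovering over two scene regions.
cams = np.array([[0.0, 0.0, 5.0], [10.0, 0.0, 5.0],
                 [0.5, 0.0, 5.0], [9.0, 1.0, 5.0]])
cents = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])  # two submodule centers

groups = partition_cameras(cams, cents)
print([g.tolist() for g in groups])  # [[0, 2], [1, 3]]
```

Each group can then be handed to an independent training job, which is what makes the scheme scale to captures spanning multiple city blocks.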
This list is automatically generated from the titles and abstracts of the papers in this site.