A General Implicit Framework for Fast NeRF Composition and Rendering
- URL: http://arxiv.org/abs/2308.04669v4
- Date: Thu, 4 Jan 2024 08:00:37 GMT
- Title: A General Implicit Framework for Fast NeRF Composition and Rendering
- Authors: Xinyu Gao, Ziyi Yang, Yunlu Zhao, Yuxiang Sun, Xiaogang Jin, Changqing Zou
- Abstract summary: We propose a general implicit pipeline for composing NeRF objects quickly.
Our work introduces a new surface representation known as Neural Depth Fields (NeDF).
It leverages an intersection neural network to query NeRF for acceleration instead of depending on an explicit spatial structure.
- Score: 40.07666955244417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variety of Neural Radiance Fields (NeRF) methods have recently achieved
remarkable rendering speed. However, current acceleration methods are
specialized and incompatible with other implicit methods, preventing
real-time composition across different types of NeRF models. Because every NeRF
relies on sampling along rays, guiding that sampling offers a general route to
acceleration. To that end, we propose a general implicit pipeline for composing
NeRF objects quickly. Our method enables the casting of dynamic shadows within
or between objects using analytical light sources while allowing multiple NeRF
objects to be seamlessly placed and rendered together under arbitrary rigid
transformations. In particular, our work introduces a new surface representation known
as Neural Depth Fields (NeDF) that quickly determines the spatial relationship
between objects by allowing direct intersection computation between rays and
implicit surfaces. It leverages an intersection neural network to query NeRF
for acceleration instead of depending on an explicit spatial structure. Our
proposed method is the first to enable both progressive and interactive
composition of NeRF objects. Additionally, it serves as a previewing
plugin for a range of existing NeRF works.
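The abstract names NeDF and its intersection network only at a high level, so the following is a minimal PyTorch sketch of the stated idea rather than the authors' implementation; the class name, layer sizes, and the sampling_bounds helper are all illustrative assumptions.

```python
# Hypothetical sketch of a NeDF-style intersection query (not the authors'
# code): a small MLP maps a ray (origin, direction) to a hit logit and a
# depth along the ray to an object's implicit surface.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntersectionNet(nn.Module):
    """Maps rays (o, d) to (hit_logit, depth); sizes are illustrative guesses."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # -> [hit_logit, raw_depth]
        )

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor):
        out = self.mlp(torch.cat([origins, dirs], dim=-1))  # (N, 6) -> (N, 2)
        hit_logit, raw_depth = out[..., 0], out[..., 1]
        return hit_logit, F.softplus(raw_depth)  # keep depth non-negative

def sampling_bounds(net, origins, dirs, margin: float = 0.05):
    """Shrink the [t_near, t_far] interval a NeRF renderer samples per ray,
    so samples cluster near the predicted surface instead of the whole ray."""
    with torch.no_grad():
        hit_logit, depth = net(origins, dirs)
    hit = torch.sigmoid(hit_logit) > 0.5
    t_near = torch.clamp(depth - margin, min=0.0)
    t_far = depth + margin
    return hit, t_near, t_far
```

Read against the abstract, the same query would also support the claimed shadow casting (fire a ray from a surface point toward an analytical light source and treat a predicted hit as occlusion) and rigid composition (transform each ray into an object's local frame before querying that object's network).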
Related papers
- GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer [40.70828307740121]
Novel view synthesis (NVS) aims to generate images at arbitrary viewpoints using multi-view images, and recent insights from neural radiance fields (NeRF) have contributed to remarkable improvements.
Generalizable NeRF (G-NeRF) still struggles to represent fine details of a specific scene due to the absence of per-scene optimization.
We propose a Geometry-driven Multi-reference Texture transfer network (GMT) available as a plug-and-play module designed for G-NeRF.
arXiv Detail & Related papers (2024-10-01T13:30:51Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent [61.56387277538849]
This paper explores promptable NeRF generation for direct conditioning and fast generation of NeRF parameters for the underlying 3D scenes.
Prompt2NeRF-PIL is capable of generating a variety of 3D objects with a single forward pass.
We show that our approach accelerates the text-to-NeRF model DreamFusion and the 3D reconstruction of the image-to-NeRF method Zero-1-to-3 by 3 to 5 times.
arXiv Detail & Related papers (2023-12-05T08:32:46Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient real-time view synthesis.
We use a small network similar to NeRF's while preserving rendering speed via a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods degrade in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms existing single-space NeRF methods in rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Compressing Explicit Voxel Grid Representations: fast NeRFs become also small [3.1473798197405944]
Re:NeRF aims to reduce the memory footprint of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF); a minimal sketch of this idea appears after this list.
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
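The iNeRF entry above only states the inversion idea, so here is a minimal, hypothetical PyTorch sketch of it: gradient descent on a photometric loss with respect to the camera pose, assuming a frozen NeRF wrapped in a differentiable render_fn (the function name and the pose parameterization are assumptions, not iNeRF's actual interface).

```python
# Minimal sketch of iNeRF-style pose estimation: optimize a camera pose so
# that a trained NeRF's rendering matches an observed image. render_fn is
# assumed to be a differentiable function from a pose vector to an image.
import torch

def invert_nerf(render_fn, target_image, init_pose, steps=300, lr=1e-2):
    """Gradient descent on a photometric loss with respect to the pose."""
    pose = init_pose.clone().requires_grad_(True)  # e.g. a 6-vector se(3) twist
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render_fn(pose)                      # differentiable render
        loss = ((rendered - target_image) ** 2).mean()  # photometric L2 error
        loss.backward()                                 # grads flow to the pose only
        opt.step()
    return pose.detach()
```

In practice iNeRF optimizes over batches of sampled rays rather than full renders to keep each step cheap, but the loop above is the core of the inversion.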