Urban Radiance Field Representation with Deformable Neural Mesh Primitives
- URL: http://arxiv.org/abs/2307.10776v1
- Date: Thu, 20 Jul 2023 11:24:55 GMT
- Title: Urban Radiance Field Representation with Deformable Neural Mesh Primitives
- Authors: Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, Changjun Jiang
- Abstract summary: Deformable Neural Mesh Primitive (DNMP) is a flexible and compact neural variant of the classic mesh representation.
Our representation enables fast rendering (2.07ms/1k pixels) and low peak memory usage (110MB/1k pixels).
We also present a lightweight version that runs 33$\times$ faster than vanilla NeRFs and is comparable to the highly-optimized Instant-NGP (0.61 vs 0.71ms/1k pixels).
- Score: 41.104140341641006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) have achieved great success in the past few
years. However, most current methods still require intensive resources due to
ray marching-based rendering. To construct urban-level radiance fields
efficiently, we design Deformable Neural Mesh Primitive~(DNMP), and propose to
parameterize the entire scene with such primitives. The DNMP is a flexible and
compact neural variant of classic mesh representation, which enjoys both the
efficiency of rasterization-based rendering and the powerful neural
representation capability for photo-realistic image synthesis. Specifically, a
DNMP consists of a set of connected deformable mesh vertices with paired vertex
features to parameterize the geometry and radiance information of a local area.
To constrain the degree of freedom for optimization and lower the storage
budgets, we enforce the shape of each primitive to be decoded from a relatively
low-dimensional latent space. The rendering colors are decoded from the vertex
features (interpolated with rasterization) by a view-dependent MLP. The DNMP
provides a new paradigm for urban-level scene representation with appealing
properties: $(1)$ High-quality rendering. Our method achieves leading
performance for novel view synthesis in urban scenarios. $(2)$ Low
computational costs. Our representation enables fast rendering (2.07ms/1k
pixels) and low peak memory usage (110MB/1k pixels). We also present a
lightweight version that runs 33$\times$ faster than vanilla NeRFs and is
comparable to the highly-optimized Instant-NGP (0.61 vs 0.71ms/1k pixels).
Project page: \href{https://dnmp.github.io/}{https://dnmp.github.io/}.
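To make the described pipeline concrete, below is a minimal, hypothetical sketch of the DNMP idea from the abstract: each primitive holds a low-dimensional shape latent that is decoded into vertex offsets of a template mesh, plus paired per-vertex features that a shared view-dependent MLP turns into color after rasterization-based interpolation. Class names, layer sizes, and the template mesh are assumptions for illustration, not the authors' implementation.
```python
# Hypothetical sketch of the DNMP representation described above -- not the authors' code.
import torch
import torch.nn as nn

class DeformableNeuralMeshPrimitive(nn.Module):
    def __init__(self, num_vertices=162, latent_dim=8, feat_dim=32):
        super().__init__()
        # Low-dimensional shape latent: constrains the degrees of freedom and storage.
        self.shape_latent = nn.Parameter(torch.zeros(latent_dim))
        # Per-vertex radiance features paired with the deformable vertices.
        self.vertex_feats = nn.Parameter(torch.zeros(num_vertices, feat_dim))
        # Decoder from the shape latent to per-vertex offsets of a template mesh.
        self.shape_decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, num_vertices * 3),
        )

    def vertices(self, template_vertices):
        # template_vertices: (num_vertices, 3) base shape, e.g. an icosphere.
        offsets = self.shape_decoder(self.shape_latent).view(-1, 3)
        return template_vertices + offsets

class ViewDependentRadianceMLP(nn.Module):
    """Maps a rasterization-interpolated vertex feature + view direction to RGB."""
    def __init__(self, feat_dim=32, dir_dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, interp_feats, view_dirs):
        # interp_feats: (N, feat_dim) features interpolated at rasterized pixels.
        # view_dirs:    (N, 3) unit viewing directions for those pixels.
        return self.mlp(torch.cat([interp_feats, view_dirs], dim=-1))

if __name__ == "__main__":
    template = torch.randn(162, 3)            # stand-in for a template mesh
    prim = DeformableNeuralMeshPrimitive()
    verts = prim.vertices(template)           # (162, 3) deformed vertices
    decoder = ViewDependentRadianceMLP()
    feats = prim.vertex_feats[:4]             # stand-in for rasterization-interpolated features
    dirs = nn.functional.normalize(torch.randn(4, 3), dim=-1)
    rgb = decoder(feats, dirs)                # (4, 3) colors in [0, 1]
```
In the full method a differentiable rasterizer (not shown here) would interpolate the per-vertex features across visible faces of many such primitives tiling the scene; the shared MLP then decodes those interpolated features and view directions into pixel colors.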
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Efficient Encoding of Graphics Primitives with Simplex-based Structures [0.8158530638728501]
We propose a simplex-based approach for encoding graphics primitives.
In the 2D image fitting task, the proposed method fits an image in 9.4% less time than the baseline method.
arXiv Detail & Related papers (2023-11-26T21:53:22Z)
- Dynamic PlenOctree for Adaptive Sampling Refinement in Explicit NeRF [6.135925201075925]
We propose the dynamic PlenOctree DOT, which adaptively refines the sample distribution to adjust to changing scene complexity.
Compared with POT, our DOT enhances visual quality, reduces parameters by over $55.15\%$/$68.84\%$, and provides 1.7/1.9 times the FPS on NeRF-synthetic and Tanks & Temples, respectively.
arXiv Detail & Related papers (2023-07-28T06:21:42Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z)
- View Synthesis with Sculpted Neural Points [64.40344086212279]
Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
arXiv Detail & Related papers (2022-05-12T03:54:35Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Efficient Neural Radiance Fields with Learned Depth-Guided Sampling [43.79307270743013]
We present a hybrid scene representation which combines the best of implicit radiance fields and explicit depth maps for efficient rendering.
Experiments show that the proposed approach exhibits state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets.
We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real-time.
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
- Spatial-Separated Curve Rendering Network for Efficient and High-Resolution Image Harmonization [59.19214040221055]
We propose a novel spatial-separated curve rendering network (S$^2$CRNet) for efficient and high-resolution image harmonization.
The proposed method reduces parameters by more than 90% compared with previous methods.
Our method runs smoothly on higher-resolution images in real time and is more than 10$\times$ faster than existing methods.
arXiv Detail & Related papers (2021-09-13T07:20:16Z)