4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions
- URL: http://arxiv.org/abs/2212.04701v2
- Date: Tue, 4 Apr 2023 02:42:35 GMT
- Title: 4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions
- Authors: Zhongshu Wang, Lingzhi Li, Zhen Shen, Li Shen, Liefeng Bo
- Abstract summary: We present a novel and effective framework, named 4K-NeRF, to pursue high fidelity view synthesis on the challenging scenarios of ultra high resolutions.
We address the issue by exploring ray correlation to enhance high-frequency details recovery.
Our method can significantly boost rendering quality on high-frequency details compared with modern NeRF methods, and achieve the state-of-the-art visual quality on 4K ultra-high-resolution scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel and effective framework, named 4K-NeRF, to
pursue high fidelity view synthesis on the challenging scenarios of ultra high
resolutions, building on the methodology of neural radiance fields (NeRF). The
rendering procedure of NeRF-based methods typically relies on a pixel-wise
manner in which rays (or pixels) are treated independently during both the
training and inference phases, limiting its representational ability to
describe subtle details, especially when lifting to an extremely high
resolution. We address this issue by exploring ray correlation to enhance
high-frequency detail recovery.
In particular, we use a 3D-aware encoder to model geometric information
effectively in a lower resolution space and recover fine details through the
3D-aware decoder, conditioned on ray features and depths estimated by the
encoder. Joint training with patch-based sampling further enables our method
to incorporate supervision from perception-oriented regularization beyond the
pixel-wise loss. Benefiting from the use of geometry-aware local context, our
method can significantly boost rendering quality on high-frequency details
compared with modern NeRF methods, achieving state-of-the-art visual quality
in 4K ultra-high-resolution scenarios. Code available at
\url{https://github.com/frozoul/4K-NeRF}
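The patch-based sampling described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `sample_ray_patch` is a hypothetical helper, and the idea shown is simply that drawing a contiguous block of rays (rather than independent random pixels) yields the local context that perception-oriented losses need.

```python
import numpy as np

def sample_ray_patch(height, width, patch_size, rng=None):
    """Sample a contiguous square patch of ray (pixel) coordinates.

    Standard NeRF training draws rays independently at random, so each
    pixel is supervised in isolation. Patch-based sampling instead
    returns a patch_size x patch_size block of coordinates, so
    perception-oriented regularization (e.g. a perceptual loss over the
    rendered patch) can be applied alongside the pixel-wise loss.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Choose the top-left corner so the whole patch stays inside the image.
    top = rng.integers(0, height - patch_size + 1)
    left = rng.integers(0, width - patch_size + 1)
    rows, cols = np.meshgrid(
        np.arange(top, top + patch_size),
        np.arange(left, left + patch_size),
        indexing="ij",
    )
    # Shape (patch_size * patch_size, 2): one (row, col) pair per ray.
    return np.stack([rows, cols], axis=-1).reshape(-1, 2)
```

For a 4K target (e.g. a 2160x3840 image), each training step would render the rays of one such patch through the encoder-decoder and compare the resulting patch against the ground-truth crop with both pixel-wise and perceptual terms.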
Related papers
- SuperNeRF-GAN: A Universal 3D-Consistent Super-Resolution Framework for Efficient and Enhanced 3D-Aware Image Synthesis [59.73403876485574]
We propose SuperNeRF-GAN, a universal framework for 3D-consistent super-resolution.
A key highlight of SuperNeRF-GAN is its seamless integration with NeRF-based 3D-aware image synthesis methods.
Experimental results demonstrate the superior efficiency, 3D-consistency, and quality of our approach.
arXiv Detail & Related papers (2025-01-12T10:31:33Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Hyb-NeRF: A Multiresolution Hybrid Encoding for Neural Radiance Fields [12.335934855851486]
We present Hyb-NeRF, a novel neural radiance field with a multi-resolution hybrid encoding.
We show that Hyb-NeRF achieves faster rendering speed with better rendering quality and an even lower memory footprint in comparison to previous methods.
arXiv Detail & Related papers (2023-11-21T10:01:08Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level from only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose corresponding solutions, including marrying multilayer perceptrons with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present the Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images.
We show that PDRF is 15X faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- A Novel Unified Model for Multi-exposure Stereo Coding Based on Low Rank Tucker-ALS and 3D-HEVC [0.6091702876917279]
We propose an efficient scheme for coding multi-exposure stereo images based on a tensor low-rank approximation scheme.
Multi-exposure fusion can be performed at the decoder to generate HDR stereo output for increased realism and binocular 3D depth cues.
Encoding with 3D-HEVC enhances the scheme's efficiency by exploiting intra-frame, inter-view, and inter-component redundancies in the low-rank approximated representation.
arXiv Detail & Related papers (2021-04-10T10:10:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.