UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene
- URL: http://arxiv.org/abs/2310.13263v1
- Date: Fri, 20 Oct 2023 04:01:35 GMT
- Title: UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene
- Authors: Jiaming Gu, Minchao Jiang, Hongsheng Li, Xiaoyuan Lu, Guangming Zhu,
Syed Afaq Ali Shah, Liang Zhang, Mohammed Bennamoun
- Abstract summary: We propose a novel neural rendering system called UE4-NeRF, specifically designed for real-time rendering of large-scale scenes.
Our approach integrates with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time rendering of large-scale scenes at 4K resolution with a frame rate of up to 43 FPS.
- Score: 52.21184153832739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) is a novel implicit 3D reconstruction method
that shows immense potential and has been gaining increasing attention. It
enables the reconstruction of 3D scenes solely from a set of photographs.
However, its real-time rendering capability, especially for interactive
real-time rendering of large-scale scenes, still has significant limitations.
To address these challenges, in this paper, we propose a novel neural rendering
system called UE4-NeRF, specifically designed for real-time rendering of
large-scale scenes. We partition each large scene into multiple sub-NeRFs. To
represent each partitioned sub-scene, we initialize polygonal meshes by
constructing multiple regular octahedra within the scene, and the vertices of
the polygonal faces are continuously optimized during training. Drawing
inspiration from Level of Detail (LOD) techniques, we train meshes at varying
levels of detail for different observation distances. Our approach integrates
with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time
rendering of large-scale scenes at 4K resolution with a frame rate of up to 43
FPS. Rendering within UE4 also facilitates scene editing
in subsequent stages. Furthermore, through experiments, we have demonstrated
that our method achieves rendering quality comparable to state-of-the-art
approaches. Project page: https://jamchaos.github.io/UE4-NeRF/.
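The abstract names its key mechanisms without implementation detail. As a rough illustration only, here is a minimal Python sketch of two of those ideas: partitioning a scene's bounding box into a grid of sub-NeRF blocks, and selecting a coarser mesh LOD as a block recedes from the camera. All names, the block-grid layout, and the distance thresholds are assumptions, not the paper's code.

```python
from dataclasses import dataclass, field
from itertools import product
import math

@dataclass
class SubNeRF:
    center: tuple                               # block center in world space
    meshes: dict = field(default_factory=dict)  # LOD level -> optimized mesh

def partition_scene(bounds_min, bounds_max, block_size):
    """Split the scene's axis-aligned bounding box into a grid of sub-NeRF blocks."""
    counts = [math.ceil((hi - lo) / block_size)
              for lo, hi in zip(bounds_min, bounds_max)]
    return [
        SubNeRF(center=tuple(lo + (i + 0.5) * block_size
                             for lo, i in zip(bounds_min, idx)))
        for idx in product(*(range(n) for n in counts))
    ]

def select_lod(block, camera_pos, thresholds=(50.0, 150.0, 400.0)):
    """Pick a mesh LOD for a block from its distance to the camera.

    Level 0 is the finest; the thresholds are illustrative placeholders.
    """
    dist = math.dist(block.center, camera_pos)
    for level, limit in enumerate(thresholds):
        if dist < limit:
            return level
    return len(thresholds)
```

In the actual system, each visible block would submit its selected-LOD mesh to UE4's rasterizer; the fixed distance thresholds above merely stand in for whatever level-switching criterion the authors use.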
Related papers
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [54.35214051961381]
3D meshes are widely used in computer vision and graphics for their efficiency in animation and minimal memory use in movies, games, AR, and VR.
However, creating temporally consistent and realistic textures for meshes remains labor-intensive for professional artists.
We present Tex4D, a zero-shot approach that integrates inherent geometry from mesh sequences with video diffusion models to produce consistent textures.
arXiv Detail & Related papers (2024-10-14T17:59:59Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- 4K4D: Real-Time 4D View Synthesis at 4K Resolution [86.6582179227016]
This paper targets high-fidelity and real-time view synthesis of dynamic 3D scenes at 4K resolution.
We propose a 4D point cloud representation that supports hardware rasterization and enables unprecedented rendering speed.
Our representation can be rendered at over 400 FPS on the DNA-Rendering dataset at 1080p resolution and 80 FPS on the ENeRF-Outdoor dataset at 4K resolution using an RTX 4090 GPU.
arXiv Detail & Related papers (2023-10-17T17:57:38Z)
- Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting [8.078460597825142]
Reconstructing dynamic 3D scenes from 2D images and generating diverse views over time is challenging due to scene complexity and temporal dynamics.
We propose to approximate the underlying spatio-temporal rendering volume of a dynamic scene by optimizing a collection of 4D primitives, with explicit geometry and appearance modeling.
Our model is conceptually simple, consisting of a 4D Gaussian parameterized by anisotropic ellipses that can rotate arbitrarily in space and time, as well as view-dependent and time-evolved appearance represented by coefficients of 4D spherindrical harmonics.
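As a rough sketch of the primitive this summary describes, the following Python class models a single 4D Gaussian: an anisotropic ellipsoid over (x, y, z, t) built from a 4x4 rotation and per-axis scales, plus a slot for appearance coefficients. The field names and the density evaluation are illustrative assumptions, not the paper's code.

```python
import numpy as np

class Gaussian4D:
    """One space-time Gaussian primitive; all details here are illustrative."""

    def __init__(self, mean4, rotation4, scales4, sh_coeffs):
        self.mean = np.asarray(mean4, dtype=float)  # (x, y, z, t) center
        R = np.asarray(rotation4, dtype=float)      # 4x4 rotation mixing space and time
        S = np.diag(scales4)                        # per-axis ellipsoid extents
        self.cov = R @ S @ S @ R.T                  # anisotropic 4D covariance
        self.sh = sh_coeffs                         # "spherindrical" harmonic coefficients

    def density(self, point4):
        """Unnormalized Gaussian density at a space-time query point."""
        d = np.asarray(point4, dtype=float) - self.mean
        return float(np.exp(-0.5 * d @ np.linalg.solve(self.cov, d)))
```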
arXiv Detail & Related papers (2023-10-16T17:57:43Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas.
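A toy Python sketch of that decomposition, with hypothetical names throughout: a small network predicts per-point logits for the three categories, and the outputs of three branch fields are blended by the resulting probabilities.

```python
import numpy as np

def category_probs(logits):
    """Softmax over per-point logits for (static, deforming, new)."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def blended_radiance(point, t, fields, logit_net):
    """Blend the three branch fields by their predicted probabilities.

    `fields` maps category names to callables; `logit_net` is a stand-in
    for whatever classifier the paper actually trains.
    """
    p_static, p_deform, p_new = category_probs(logit_net(point, t))
    return (p_static * fields["static"](point)
            + p_deform * fields["deforming"](point, t)
            + p_new * fields["new"](point, t))
```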
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.