City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
- URL: http://arxiv.org/abs/2312.16457v2
- Date: Mon, 1 Apr 2024 03:10:53 GMT
- Title: City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
- Authors: Kaiwen Song, Xiaoyi Zeng, Chenqu Ren, Juyong Zhang
- Abstract summary: City-on-Web is the first method for real-time rendering of large-scale scenes on the web.
Our system achieves real-time rendering of large-scale scenes at approximately 32 FPS with an RTX 3060 GPU on the web.
- Score: 26.92522314818356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing neural radiance field-based methods can achieve real-time rendering of small scenes on the web platform. However, extending these methods to large-scale scenes still poses significant challenges due to limited resources in computation, memory, and bandwidth. In this paper, we propose City-on-Web, the first method for real-time rendering of large-scale scenes on the web. We propose a block-based volume rendering method to guarantee 3D consistency and correct occlusion between blocks, and introduce a Level-of-Detail strategy combined with dynamic loading/unloading of resources to significantly reduce memory demands. Our system achieves real-time rendering of large-scale scenes at approximately 32 FPS with an RTX 3060 GPU on the web and maintains rendering quality comparable to current state-of-the-art novel view synthesis methods.
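The abstract's two core ideas, block-based volume rendering with correct inter-block occlusion and a distance-driven Level-of-Detail scheme, can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration under assumed data layouts (BlockRender, composite_blocks, choose_lod, and the LOD thresholds are all invented for this example), not the City-on-Web implementation: each block renders a color and a transmittance along a ray independently, the per-block results are composited in depth order so occlusion between blocks stays consistent, and a simple distance rule decides which LOD of each block to keep resident.

```python
# Minimal, hypothetical sketch of the two ideas in the abstract; this is NOT the
# City-on-Web implementation, and every name/threshold below is an assumption.
from dataclasses import dataclass
import numpy as np


@dataclass
class BlockRender:
    """Per-block volume-rendering output for a single ray."""
    color: np.ndarray        # RGB accumulated inside the block
    transmittance: float     # fraction of light passing through the block (1.0 = empty)
    depth: float             # entry depth of the block along the ray, used for sorting


def composite_blocks(renders):
    """Front-to-back compositing of independently rendered blocks.

    Each block contributes its accumulated color weighted by the transmittance of
    all blocks in front of it, i.e. the standard volume-rendering integral split
    at block boundaries, so occlusion between blocks stays consistent.
    """
    final_color = np.zeros(3)
    acc_transmittance = 1.0
    for r in sorted(renders, key=lambda b: b.depth):  # near-to-far order
        final_color += acc_transmittance * r.color
        acc_transmittance *= r.transmittance
    return final_color


def choose_lod(block_distance, thresholds=(50.0, 200.0)):
    """Pick a level of detail from the camera-to-block distance (toy rule).

    Nearby blocks use the finest LOD; distant blocks fall back to coarser ones,
    so only a bounded set of block resources needs to stay loaded at any time.
    """
    for lod, t in enumerate(thresholds):
        if block_distance < t:
            return lod
    return len(thresholds)


# Toy usage: a far block partially occluded by a near block.
near = BlockRender(color=np.array([0.4, 0.1, 0.1]), transmittance=0.5, depth=1.0)
far = BlockRender(color=np.array([0.0, 0.3, 0.6]), transmittance=0.1, depth=5.0)
print(composite_blocks([near, far]))   # far block attenuated by the near block's 0.5
print(choose_lod(120.0))               # -> 1 (medium LOD for a mid-distance block)
```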
Related papers
- Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB [29.267039546199094]
This paper aims to encode a 3D scene into an extremely compact representation from 2D images.
It enables transmission, decoding, and rendering in real time across various platforms.
arXiv Detail & Related papers (2024-09-24T03:06:22Z) - VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z) - HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z) - EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present EvaSurf, an implicit textured surface reconstruction method for mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z) - Real-Time Neural Rasterization for Large Scenes [39.198327570559684]
We propose a new method for realistic real-time novel-view synthesis of large scenes.
Existing neural rendering methods generate realistic results, but primarily work for small-scale scenes.
Our work is the first to enable real-time rendering of large real-world scenes.
arXiv Detail & Related papers (2023-11-09T18:59:10Z) - UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene [52.21184153832739]
We propose a novel neural rendering system called UE4-NeRF, specifically designed for real-time rendering of large-scale scenes.
Our approach combines NeRF with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time rendering of large-scale scenes at 4K resolution with a frame rate of up to 43 FPS.
arXiv Detail & Related papers (2023-10-20T04:01:35Z) - 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z) - NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z) - Real-time Neural Radiance Caching for Path Tracing [67.46991813306708]
We present a real-time neural radiance caching method for path-traced global illumination.
Our system is designed to handle fully dynamic scenes and makes no assumptions about the lighting, geometry, or materials.
We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios.
arXiv Detail & Related papers (2021-06-23T13:09:58Z) - Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual Reality [11.969281058344581]
High-quality 3D graphics requires large volumes of fine-detailed scene data for rendering.
Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets.
We present the first gaze-contingent 3D neural representation and view synthesis method.
arXiv Detail & Related papers (2021-03-30T14:05:47Z)