Real-Time Neural Rasterization for Large Scenes
- URL: http://arxiv.org/abs/2311.05607v1
- Date: Thu, 9 Nov 2023 18:59:10 GMT
- Title: Real-Time Neural Rasterization for Large Scenes
- Authors: Jeffrey Yunfan Liu, Yun Chen, Ze Yang, Jingkang Wang, Sivabalan
Manivasagam, Raquel Urtasun
- Abstract summary: We propose a new method for realistic real-time novel-view synthesis of large scenes.
Existing neural rendering methods generate realistic results, but primarily work for small-scale scenes.
Our work is the first to enable real-time rendering of large real-world scenes.
- Score: 39.198327570559684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for realistic real-time novel-view synthesis (NVS) of
large scenes. Existing neural rendering methods generate realistic results, but
primarily work for small-scale scenes (<50 square meters) and have difficulty
at large scale (>10,000 square meters). Traditional graphics-based rasterization
rendering is fast for large scenes but lacks realism and requires expensive
manually created assets. Our approach combines the best of both worlds by
taking a moderate-quality scaffold mesh as input and learning a neural texture
field and shader to model view-dependent effects to enhance realism, while
still using the standard graphics pipeline for real-time rendering. Our method
outperforms existing neural rendering methods, providing at least 30x faster
rendering with comparable or better realism for large self-driving and drone
scenes. Our work is the first to enable real-time rendering of large real-world
scenes.
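To make the pipeline described in the abstract concrete, here is a minimal sketch, not the authors' actual implementation: the scaffold mesh is rasterized with the standard graphics pipeline (rasterizer not shown), each pixel looks up a learned feature in a neural texture, and a small MLP shader conditioned on the view direction turns that feature into a color. All names, layer sizes, and the bilinear texture lookup are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureShader(nn.Module):
    """Sketch of a neural texture field plus view-dependent MLP shader."""

    def __init__(self, tex_res=512, feat_dim=8, hidden=64):
        super().__init__()
        # Learned texture: a grid of latent features instead of RGB texels.
        self.texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res) * 0.01)
        # Small MLP shader: maps (texture feature, view direction) -> RGB.
        self.shader = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, uv, view_dir):
        # uv: (H, W, 2) per-pixel texture coordinates in [0, 1],
        #     produced by rasterizing the scaffold mesh.
        # view_dir: (H, W, 3) per-pixel unit view directions.
        grid = uv.unsqueeze(0) * 2.0 - 1.0                              # to [-1, 1]
        feats = F.grid_sample(self.texture, grid, align_corners=True)  # (1, C, H, W)
        feats = feats.squeeze(0).permute(1, 2, 0)                      # (H, W, C)
        x = torch.cat([feats, view_dir], dim=-1)
        return self.shader(x)                                          # (H, W, 3)

# Toy usage with random stand-ins for rasterizer outputs.
model = NeuralTextureShader()
uv = torch.rand(64, 64, 2)
view_dir = F.normalize(torch.randn(64, 64, 3), dim=-1)
rgb = model(uv, view_dir)
print(rgb.shape)  # torch.Size([64, 64, 3])
```

Because the heavy geometric work stays in the rasterizer and only a tiny per-pixel network runs afterward, this kind of design can keep rendering cost largely independent of scene size, which is the property the abstract emphasizes.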
Related papers
- City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web [26.92522314818356]
City-on-Web is the first method for real-time rendering of large-scale scenes on the web.
Our system achieves real-time rendering of large-scale scenes at approximately 32 FPS on an RTX 3060 GPU on the web.
arXiv Detail & Related papers (2023-12-27T08:00:47Z)
- ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering [62.81677824868519]
We propose an animatable Gaussian splatting approach for photorealistic rendering of dynamic humans in real-time.
We parameterize the clothed human as animatable 3D Gaussians, which can be efficiently splatted into image space to generate the final rendering.
We benchmark ASH with competing methods on pose-controllable avatars, demonstrating that our method outperforms existing real-time methods by a large margin and shows comparable or even better results than offline methods.
arXiv Detail & Related papers (2023-12-10T17:07:37Z)
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and runs on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- Self-supervised novel 2D view synthesis of large-scale scenes with efficient multi-scale voxel carving [77.07589573960436]
We introduce an efficient multi-scale voxel carving method to generate novel views of real scenes.
Our final high-resolution output is efficiently self-trained on data automatically generated by the voxel carving module.
We demonstrate the effectiveness of our method on highly complex and large-scale scenes in real environments.
arXiv Detail & Related papers (2023-06-26T13:57:05Z)
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that combines image-based and neural 3D representations to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z)
- Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments [8.258451067861932]
We propose an approach for capturing real-world objects in everyday environments faithfully and fast.
We use a novel neural representation to reconstruct effects, such as translucent object parts, and preserve object appearance.
This leads to a seamless integration of the proposed neural assets with existing mesh environments and objects.
arXiv Detail & Related papers (2022-12-12T18:55:03Z)
- Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [13.894134334543363]
We propose a novel framework based on deep learning to build a real-time inverse graphics encoder.
Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer.
Our framework enables novel applications where consumers can virtually try on a novel unknown product from an inspirational reference image.
arXiv Detail & Related papers (2022-05-12T18:44:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.