BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
- URL: http://arxiv.org/abs/2302.14859v2
- Date: Tue, 16 May 2023 15:01:42 GMT
- Title: BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
- Authors: Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P.
Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
- Abstract summary: We present a method for reconstructing high-quality meshes of large real-world scenes suitable for photorealistic novel view synthesis.
We first optimize a hybrid neural volume-surface scene representation designed to have well-behaved level sets that correspond to surfaces in the scene.
We then bake this representation into a high-quality triangle mesh, which we equip with a simple and fast view-dependent appearance model based on spherical Gaussians.
- Score: 42.93055827628597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for reconstructing high-quality meshes of large unbounded
real-world scenes suitable for photorealistic novel view synthesis. We first
optimize a hybrid neural volume-surface scene representation designed to have
well-behaved level sets that correspond to surfaces in the scene. We then bake
this representation into a high-quality triangle mesh, which we equip with a
simple and fast view-dependent appearance model based on spherical Gaussians.
Finally, we optimize this baked representation to best reproduce the captured
viewpoints, resulting in a model that can leverage accelerated polygon
rasterization pipelines for real-time view synthesis on commodity hardware. Our
approach outperforms previous scene representations for real-time rendering in
terms of accuracy, speed, and power consumption, and produces high quality
meshes that enable applications such as appearance editing and physical
simulation.
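As a point of reference for the appearance model described above, here is a minimal sketch of how a per-vertex spherical-Gaussian view-dependent color of this general kind can be evaluated: a diffuse base color plus a small set of lobes whose contribution depends on the viewing direction. The function name, parameter names, and lobe count are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def eval_appearance(diffuse, lobe_color, lobe_axis, lobe_sharpness, view_dir):
    """Evaluate a simple spherical-Gaussian view-dependent color (illustrative sketch).

    diffuse:        (3,)   diffuse RGB at the surface point
    lobe_color:     (N, 3) RGB amplitude of each spherical Gaussian lobe
    lobe_axis:      (N, 3) unit mean direction of each lobe
    lobe_sharpness: (N,)   lobe concentration (larger = tighter highlight)
    view_dir:       (3,)   unit direction from the surface point toward the camera
    """
    # Each lobe contributes c_i * exp(lambda_i * (dot(mu_i, d) - 1)),
    # which peaks when the view direction aligns with the lobe axis.
    cos_term = lobe_axis @ view_dir                       # (N,)
    weights = np.exp(lobe_sharpness * (cos_term - 1.0))   # (N,)
    specular = weights @ lobe_color                       # (3,)
    return np.clip(diffuse + specular, 0.0, 1.0)

# Illustrative usage with a single lobe aligned with the view direction.
rgb = eval_appearance(
    diffuse=np.array([0.4, 0.3, 0.2]),
    lobe_color=np.array([[0.5, 0.5, 0.5]]),
    lobe_axis=np.array([[0.0, 0.0, 1.0]]),
    lobe_sharpness=np.array([8.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
)
```

Because each lobe costs only a dot product and an exponential per pixel, this kind of model maps naturally onto a fragment shader in a standard rasterization pipeline, which is consistent with the real-time claim above.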
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo [54.00987996368157]
We present MVSGaussian, a new generalizable 3D Gaussian representation approach derived from Multi-View Stereo (MVS).
MVSGaussian achieves real-time rendering with better synthesis quality for each scene.
arXiv Detail & Related papers (2024-05-20T17:59:30Z)
- FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces [21.946327323788275]
3D rendering of dynamic faces is a challenging problem.
We present a novel representation that enables high-quality rendering of an actor's dynamic facial performances.
arXiv Detail & Related papers (2024-04-22T00:44:13Z)
- REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices [51.983541908241726]
This work tackles the challenging task of achieving real-time novel view synthesis for reflective surfaces across various scenes.
Existing real-time rendering methods, especially those based on meshes, often have subpar performance in modeling surfaces with rich view-dependent appearances.
We decompose color into diffuse and specular components, and model the specular color in the reflected direction using a neural environment map.
arXiv Detail & Related papers (2024-03-25T07:07:50Z)
- RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS [47.47003067842151]
We present RadSplat, a lightweight method for robust real-time rendering of complex scenes.
First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
Next, we develop a novel pruning technique that reduces the overall point count while maintaining high quality, leading to smaller, more compact scene representations with faster inference speeds.
arXiv Detail & Related papers (2024-03-20T17:59:55Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering [15.009484906668737]
We introduce a novel technique that reconstructs mesh-based blendshape rigs from single or sparse multi-view videos.
Experiments demonstrate that, with the flexible input of single or sparse multi-view videos, we reconstruct personalized high-fidelity blendshapes.
arXiv Detail & Related papers (2024-01-16T14:41:31Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose HybridNeRF, a method that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Neural Lumigraph Rendering [33.676795978166375]
State-of-the-art (SOTA) neural volume rendering approaches are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions.
We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images.
Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information.
arXiv Detail & Related papers (2021-03-22T03:46:05Z)