Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based
View Synthesis
- URL: http://arxiv.org/abs/2402.12377v1
- Date: Mon, 19 Feb 2024 18:59:41 GMT
- Title: Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based
View Synthesis
- Authors: Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin,
Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas
Geiger
- Abstract summary: We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
- Score: 70.40950409274312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While surface-based view synthesis algorithms are appealing due to their low
computational requirements, they often struggle to reproduce thin structures.
In contrast, more expensive methods that model the scene's geometry as a
volumetric density field (e.g. NeRF) excel at reconstructing fine geometric
detail. However, density fields often represent geometry in a "fuzzy" manner,
which hinders exact localization of the surface. In this work, we modify
density fields to encourage them to converge towards surfaces, without
compromising their ability to reconstruct thin structures. First, we employ a
discrete opacity grid representation instead of a continuous density field,
which allows opacity values to discontinuously transition from zero to one at
the surface. Second, we anti-alias by casting multiple rays per pixel, which
allows occlusion boundaries and subpixel structures to be modelled without
using semi-transparent voxels. Third, we minimize the binary entropy of the
opacity values, which facilitates the extraction of surface geometry by
encouraging opacity values to binarize towards the end of training. Lastly, we
develop a fusion-based meshing strategy followed by mesh simplification and
appearance model fitting. The compact meshes produced by our model can be
rendered in real-time on mobile devices and achieve significantly higher view
synthesis quality compared to existing mesh-based approaches.
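To make the second and third steps concrete, here is a minimal NumPy sketch (not the authors' implementation; the names binary_entropy_loss, supersampled_pixel, and rays_per_pixel are illustrative):

```python
# Illustrative sketch of the entropy loss and per-pixel supersampling
# described in the abstract; names and signatures are not from the paper.
import numpy as np

def binary_entropy_loss(alpha, eps=1e-7):
    """Mean binary entropy of opacity values in [0, 1].

    Minimizing this term pushes each opacity toward 0 or 1, so the grid
    binarizes and a surface can be extracted cleanly; `eps` guards
    against log(0) at exactly binary opacities.
    """
    a = np.clip(alpha, eps, 1.0 - eps)
    return np.mean(-a * np.log(a) - (1.0 - a) * np.log(1.0 - a))

def supersampled_pixel(render_ray, pixel_corner, pixel_size, rays_per_pixel=4):
    """Anti-alias one pixel by averaging several jittered rays.

    `render_ray` maps a 2D sample position to an RGB color. Jittering
    sample positions inside the pixel footprint lets occlusion boundaries
    and subpixel structure average out in the final color, rather than
    being modeled with semi-transparent voxels.
    """
    offsets = np.random.uniform(0.0, pixel_size, size=(rays_per_pixel, 2))
    colors = np.stack([render_ray(np.asarray(pixel_corner) + o) for o in offsets])
    return colors.mean(axis=0)

# Opacities near 0.5 incur high entropy (~0.69 nats); binarized ones near zero.
print(binary_entropy_loss(np.array([0.5, 0.99, 0.01])))
```

In a training loop, such an entropy term would be weighted against the photometric loss, with the weight ramped up so that opacities binarize towards the end of training, as described above.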
Related papers
- Volumetric Surfaces: Representing Fuzzy Geometries with Multiple Meshes [59.17785932398617]
High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering.
We present a novel representation for real-time view synthesis where the number of sampling locations is small and bounded.
We show that our method can represent challenging fuzzy objects while achieving higher frame rates than volume-based and splatting-based methods on low-end and mobile devices.
arXiv Detail & Related papers (2024-09-04T07:18:26Z)
- PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing [30.28380889862059]
We present a data-driven approach for automatic feature detection and remeshing.
Our algorithm improves over the state of the art by 26% in normals F-score and 42% in perceptual $\text{RMSE}_{\text{v}}$.
arXiv Detail & Related papers (2023-11-30T12:15:45Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Neural Volumetric Mesh Generator [40.224769507878904]
We propose Neural Volumetric Mesh Generator (NVMG), which can generate novel and high-quality volumetric meshes.
Our pipeline can generate high-quality artifact-free volumetric and surface meshes from random noise or a reference image without any post-processing.
arXiv Detail & Related papers (2022-10-06T18:46:51Z)
- Representing 3D Shapes with Probabilistic Directed Distance Fields [7.528141488548544]
We develop a novel shape representation that allows fast differentiable rendering within an implicit architecture.
We show how to model inherent discontinuities in the underlying field.
We also apply our method to fitting single shapes, unpaired 3D-aware generative image modelling, and single-image 3D reconstruction tasks.
arXiv Detail & Related papers (2021-12-10T02:15:47Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging multiview scene datasets produced high-quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
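The last entry's idea of modeling density as a function of geometry can be written as a single formula. The following restates the formulation from that paper (Volume Rendering of Neural Implicit Surfaces), where $d_\Omega(x)$ is the signed distance to the surface (positive outside) and $\Psi_\beta$ is the CDF of a zero-mean Laplace distribution with scale $\beta$:

```latex
% Density as a transformed signed distance: near \alpha inside the shape,
% decaying smoothly to 0 outside, with sharpness controlled by \beta.
\sigma(x) = \alpha \, \Psi_\beta\!\big(-d_\Omega(x)\big),
\qquad
\Psi_\beta(s) =
\begin{cases}
\frac{1}{2}\exp\!\left(\frac{s}{\beta}\right) & s \le 0, \\
1 - \frac{1}{2}\exp\!\left(-\frac{s}{\beta}\right) & s > 0.
\end{cases}
```

As $\beta \to 0$ the density converges to a scaled indicator of the shape's interior, which is what makes surface extraction from such a density well posed.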
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.