Volumetric Surfaces: Representing Fuzzy Geometries with Multiple Meshes
- URL: http://arxiv.org/abs/2409.02482v1
- Date: Wed, 4 Sep 2024 07:18:26 GMT
- Title: Volumetric Surfaces: Representing Fuzzy Geometries with Multiple Meshes
- Authors: Stefano Esposito, Anpei Chen, Christian Reiser, Samuel Rota Bulò, Lorenzo Porzi, Katja Schwarz, Christian Richardt, Michael Zollhöfer, Peter Kontschieder, Andreas Geiger
- Abstract summary: High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering.
We present a novel representation for real-time view synthesis where the number of sampling locations is small and bounded.
We show that our method can represent challenging fuzzy objects while achieving higher frame rates than volume-based and splatting-based methods on low-end and mobile devices.
- Score: 59.17785932398617
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering. While surface-based methods generally are the fastest, they cannot faithfully model fuzzy geometry like hair. In turn, alpha-blending techniques excel at representing fuzzy materials but require an unbounded number of samples per ray (P1). Further overheads are induced by empty space skipping in volume rendering (P2) and sorting input primitives in splatting (P3). These problems are exacerbated on low-performance graphics hardware, e.g. on mobile devices. We present a novel representation for real-time view synthesis where the (P1) number of sampling locations is small and bounded, (P2) sampling locations are efficiently found via rasterization, and (P3) rendering is sorting-free. We achieve this by representing objects as semi-transparent multi-layer meshes, rendered in fixed layer order from outermost to innermost. We model mesh layers as SDF shells with optimal spacing learned during training. After baking, we fit UV textures to the corresponding meshes. We show that our method can represent challenging fuzzy objects while achieving higher frame rates than volume-based and splatting-based methods on low-end and mobile devices.
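The key property is that the nested shells reach the rasterizer already depth-ordered, so compositing needs no per-pixel sort. Below is a minimal NumPy sketch of that fixed-order, front-to-back blend, assuming `layer_rgba` holds the rasterized RGBA of each textured shell with the outermost layer first (the name and array layout are illustrative, not the paper's code):

```python
import numpy as np

def composite_layers(layer_rgba: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing of K pre-sorted mesh layers.

    layer_rgba: (K, H, W, 4) array, layer 0 = outermost shell.
    Because the shells are nested and drawn outermost to innermost,
    no per-pixel sorting is needed (P3) and the per-ray sample count
    is bounded by K (P1).
    """
    K, H, W, _ = layer_rgba.shape
    color = np.zeros((H, W, 3))
    transmittance = np.ones((H, W, 1))
    for k in range(K):
        rgb, alpha = layer_rgba[k, ..., :3], layer_rgba[k, ..., 3:4]
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color
```

Since K is fixed at bake time, the cost per pixel is constant regardless of scene complexity, which is what makes the method attractive on low-end hardware.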
Related papers
- Subsurface Scattering for 3D Gaussian Splatting [10.990813043493642]
3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface.
We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data.
Our approach enables material editing, relighting and novel view synthesis at interactive rates.
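OLAT capture is useful because light transport is linear in the illumination: an image under any combination of the captured single lights equals the same combination of the OLAT images. A minimal sketch of that superposition identity (generic relighting, not the paper's learned radiance-transfer model):

```python
import numpy as np

def relight_from_olat(olat_images: np.ndarray, light_weights: np.ndarray) -> np.ndarray:
    """Relight via linearity of light transport.

    olat_images: (L, H, W, 3), one image per single light.
    light_weights: (L,) intensities of each light in the target setup.
    Returns the (H, W, 3) image under the combined lighting.
    """
    return np.tensordot(light_weights, olat_images, axes=1)
```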
arXiv Detail & Related papers (2024-08-22T10:34:01Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
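A rough sketch of the two sampling ingredients the summary describes: a slab test to keep only rays that actually intersect an occupied voxel, and uniform sampling of extra points inside that voxel. Both helpers are illustrative; the Transformer stage that consumes these points is omitted:

```python
import numpy as np

def ray_hits_voxel(ray_o, ray_d, box_min, box_max):
    """Slab test: does the ray (origin ray_o, direction ray_d)
    intersect the axis-aligned voxel [box_min, box_max]?"""
    safe_d = np.where(np.abs(ray_d) < 1e-12, 1e-12, ray_d)
    t0 = (box_min - ray_o) / safe_d
    t1 = (box_max - ray_o) / safe_d
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    return t_far >= max(t_near, 0.0)

def sample_in_voxel(box_min, box_max, n, rng):
    """Draw n uniform points inside the voxel; these stand in for the
    extra in-voxel samples handed to the Transformer (omitted here)."""
    return box_min + rng.random((n, 3)) * (box_max - box_min)
```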
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
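One generic way to encourage an opacity field to converge toward hard surfaces is a binary-entropy penalty that vanishes at 0 and 1 and peaks at 0.5. The sketch below illustrates that regularizer under this assumption; the paper's actual loss may differ:

```python
import numpy as np

def binary_entropy_penalty(alpha, eps=1e-6):
    """Penalty that is zero when alpha is exactly 0 or 1 and maximal
    at 0.5, nudging a density/opacity field toward binary (surface-like)
    values -- a generic binarization regularizer, not the paper's code."""
    a = np.clip(alpha, eps, 1.0 - eps)
    return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a))
```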
arXiv Detail & Related papers (2024-02-19T18:59:41Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
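The idea can be caricatured as a per-ray branch: where the volume-rendering weights are dominated by a single sample, treat the ray as a surface hit and pay for one lookup; elsewhere fall back to full alpha blending. A schematic sketch (the threshold rule and names are illustrative, not HybridNeRF's actual surfaceness criterion):

```python
import numpy as np

def adaptive_render(weights, colors, surface_thresh=0.9):
    """Per-ray hybrid shading rule (schematic).

    weights: (N,) volume-rendering weights along one ray.
    colors:  (N, 3) per-sample colors.
    If one sample dominates, the ray is surface-like and a single
    sample suffices; otherwise blend all samples volumetrically.
    """
    k = int(np.argmax(weights))
    if weights[k] >= surface_thresh:
        return colors[k]        # surface-like: single sample
    return weights @ colors     # fuzzy: full volumetric blend
```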
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency [10.55497978011315]
We propose a new framework called RayDF to formulate 3D shapes as ray-based neural functions.
Our method achieves a 1000x faster speed than coordinate-based methods to render an 800x800 depth image.
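The speedup comes from querying once per ray instead of marching: a ray-surface distance field maps (origin, direction) directly to the hit distance, whereas coordinate-based fields must evaluate many points per ray. A sketch assuming a trained network `ray_dist_fn` (a placeholder name):

```python
def render_depth(ray_dist_fn, rays_o, rays_d, h, w):
    """Render a depth map with one network query per ray.

    rays_o, rays_d: (h*w, 3) ray origins and directions.
    ray_dist_fn: stand-in for the learned ray-surface distance network,
    returning an (h*w,) array of hit distances.
    """
    t = ray_dist_fn(rays_o, rays_d)  # single batched query, no marching
    return t.reshape(h, w)
```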
arXiv Detail & Related papers (2023-10-30T15:22:50Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
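The anti-aliasing mechanism parallels texture mipmapping: choose the grid level whose voxel size matches the sample's pixel footprint, then fetch from that level. A schematic lookup under that assumption (nearest-voxel fetch for brevity, where a real renderer would interpolate; all names here are illustrative):

```python
import numpy as np

def mip_lookup(pyramid, x, footprint, base_voxel_size):
    """Fetch a feature from the mip level matching the sample footprint.

    pyramid: list of 3D feature grids, level 0 finest, voxel size
             doubling per level. x: query point in [0, 1)^3.
    footprint: world-space pixel footprint at the sample.
    """
    level = int(np.clip(np.log2(max(footprint / base_voxel_size, 1.0)),
                        0, len(pyramid) - 1))
    grid = pyramid[level]
    res = np.array(grid.shape[:3])
    idx = np.minimum((x * res).astype(int), res - 1)
    return grid[tuple(idx)]  # nearest fetch; Mip-VoG interpolates
```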
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Generative Occupancy Fields for 3D Surface-Aware Image Synthesis [123.11969582055382]
Generative Occupancy Fields (GOF) is a novel model based on generative radiance fields.
GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces.
arXiv Detail & Related papers (2021-11-01T14:20:43Z)