Radiance Meshes for Volumetric Reconstruction
- URL: http://arxiv.org/abs/2512.04076v1
- Date: Wed, 03 Dec 2025 18:57:03 GMT
- Title: Radiance Meshes for Volumetric Reconstruction
- Authors: Alexander Mai, Trevor Hedstrom, George Kopanas, Janne Kontkanen, Falko Kuester, Jonathan T. Barron,
- Abstract summary: We introduce radiance meshes, a technique for representing radiance fields with constant density tetrahedral cells. Our model is able to perform exact and fast volume rendering using both rasterization and ray-tracing. Our rendering method exactly evaluates the volume rendering equation and enables high-quality, real-time view synthesis on standard consumer hardware.
- Score: 56.51690637804858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce radiance meshes, a technique for representing radiance fields with constant density tetrahedral cells produced with a Delaunay tetrahedralization. Unlike a Voronoi diagram, a Delaunay tetrahedralization yields simple triangles that are natively supported by existing hardware. As such, our model is able to perform exact and fast volume rendering using both rasterization and ray-tracing. We introduce a new rasterization method that achieves faster rendering speeds than all prior radiance field representations (assuming an equivalent number of primitives and resolution) across a variety of platforms. Optimizing the positions of Delaunay vertices introduces topological discontinuities (edge flips). To solve this, we use a Zip-NeRF-style backbone which allows us to express a smoothly varying field even when the topology changes. Our rendering method exactly evaluates the volume rendering equation and enables high quality, real-time view synthesis on standard consumer hardware. Our tetrahedral meshes also lend themselves to a variety of exciting applications including fisheye lens distortion, physics-based simulation, editing, and mesh extraction.
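The claim of exact volume rendering follows from the piecewise-constant density: within each tetrahedral cell the volume rendering integral has a closed form, so a ray can be composited cell by cell with no quadrature error. A minimal sketch of that per-ray accumulation (not the paper's implementation; it assumes the ray/tetrahedron traversal has already produced the per-cell segments):

```python
import math

def render_ray(segments):
    """Exact volume rendering along a ray through constant-density cells.

    segments: list of (density, length, (r, g, b)) tuples, one per
    tetrahedral cell the ray crosses, in front-to-back order.
    For a constant-density cell the volume rendering integral is exact:
    the cell contributes T * (1 - exp(-sigma * l)) * color, and the
    transmittance T decays by exp(-sigma * l).
    """
    T = 1.0                      # transmittance accumulated so far
    color = [0.0, 0.0, 0.0]
    for sigma, length, c in segments:
        alpha = 1.0 - math.exp(-sigma * length)  # exact cell opacity
        for k in range(3):
            color[k] += T * alpha * c[k]
        T *= 1.0 - alpha         # same as T *= exp(-sigma * length)
    return color, T
```

Because the per-cell terms are exact, splitting one cell into two with the same density gives bit-for-bit-comparable results, which is what distinguishes this from step-based ray marching.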
Related papers
- UTrice: Unifying Primitives in Differentiable Ray Tracing and Rasterization via Triangles for Particle-Based 3D Scenes [1.633289883726582]
Ray tracing 3D Gaussian particles enables realistic effects such as depth of field and refractions, and flexible camera modeling for novel-view rendering. Existing methods trace Gaussians through triangle geometry, which requires constructing complex intermediate meshes and performing costly intersection tests. We propose a differentiable triangle-based ray tracing pipeline that treats triangles as rendering primitives without relying on any proxy geometry.
arXiv Detail & Related papers (2025-12-04T03:33:10Z) - LinPrim: Linear Primitives for Differentiable Volumetric Rendering [51.56484100374058]
We introduce two new scene representations based on linear primitives. We present a differentiable rasterizer that runs efficiently on GPU. We demonstrate comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2025-01-27T18:49:38Z) - Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering [37.48219196092378]
We propose an efficient radiance field rendering algorithm that incorporates a rasterization process on adaptive sparse voxels without neural networks or 3D Gaussians. Our method improves the previous neural-free voxel model by over 4 dB PSNR and more than 10x FPS speedup. Our voxel representation is seamlessly compatible with grid-based 3D processing techniques such as Volume Fusion, Voxel Pooling, and Marching Cubes.
arXiv Detail & Related papers (2024-12-05T18:59:11Z) - Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation [0.0]
Inverse rendering seeks to derive the physical properties of a scene, including light, geometry, textures, and materials.
Meshes, a traditional representation adopted by many simulation pipelines, still see limited use in radiance fields for inverse rendering.
This paper introduces a novel framework called Triangle Patchlet (abbr. Triplet), a mesh-based representation, to comprehensively approximate these parameters.
arXiv Detail & Related papers (2024-10-16T09:59:11Z) - Volumetric Surfaces: Representing Fuzzy Geometries with Layered Meshes [59.17785932398617]
High-quality view synthesis relies on volume rendering, splatting, or surface rendering. We present a novel representation for real-time view synthesis where the number of sampling locations is small and bounded. We achieve this by representing objects as semi-transparent multi-layer meshes rendered in a fixed order.
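Rendering semi-transparent layers in a fixed order reduces, per pixel, to standard front-to-back "over" compositing, which is why the sampling cost stays small and bounded. A hedged sketch of that compositing step (an illustration of the general technique, not this paper's renderer; it assumes per-pixel layer opacities and colors have already been rasterized):

```python
def composite_layers(layers, background=(0.0, 0.0, 0.0)):
    """Front-to-back 'over' compositing of semi-transparent layers.

    layers: list of (alpha, (r, g, b)) samples for one pixel, in the
    fixed front-to-back order in which the layered meshes are drawn.
    The per-pixel cost is linear in the (small, bounded) layer count.
    """
    T = 1.0                       # remaining transmittance
    out = [0.0, 0.0, 0.0]
    for alpha, c in layers:
        for k in range(3):
            out[k] += T * alpha * c[k]
        T *= 1.0 - alpha          # light blocked by this layer
    for k in range(3):            # whatever survives sees the background
        out[k] += T * background[k]
    return out
```

The fixed rendering order is what makes this rasterizer-friendly: no per-pixel depth sorting is needed at render time.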
arXiv Detail & Related papers (2024-09-04T07:18:26Z) - Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z) - Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z) - NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support [43.5015470997138]
We present a method for generating high-quality watertight manifold meshes from multi-view input images. Our method combines the benefits of both worlds; we take the geometry obtained from neural fields, and further optimize the geometry as well as a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.