Radiant Triangle Soup with Soft Connectivity Forces for 3D Reconstruction and Novel View Synthesis
- URL: http://arxiv.org/abs/2505.23642v1
- Date: Thu, 29 May 2025 16:50:28 GMT
- Title: Radiant Triangle Soup with Soft Connectivity Forces for 3D Reconstruction and Novel View Synthesis
- Authors: Nathaniel Burgdorfer, Philippos Mordohai
- Abstract summary: We introduce an inference-time optimization framework utilizing triangles to represent the geometry and appearance of the scene. Compared to the current most-widely used primitives for 3D scene representation, namely Gaussian splats, triangles allow for more expressive color interpolation. We formulate connectivity forces between triangles during optimization, encouraging explicit, but soft, surface continuity in 3D.
- Score: 5.688136904090347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce an inference-time optimization framework utilizing triangles to represent the geometry and appearance of the scene. More specifically, we develop a scene optimization algorithm for triangle soup, a collection of disconnected semi-transparent triangle primitives. Compared to the current most-widely used primitives for 3D scene representation, namely Gaussian splats, triangles allow for more expressive color interpolation, and benefit from a large algorithmic infrastructure for downstream tasks. Triangles, unlike full-rank Gaussian kernels, naturally combine to form surfaces. We formulate connectivity forces between triangles during optimization, encouraging explicit, but soft, surface continuity in 3D. We perform experiments on a representative 3D reconstruction dataset and show competitive photometric and geometric results.
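The abstract does not detail how the connectivity forces are implemented; as a rough, hypothetical illustration (not the authors' code), the sketch below adds a soft attraction penalty between nearby vertices of different triangles in a soup, so that gradient-based optimization pulls almost-touching triangles toward shared boundaries without hard-welding them into a mesh. The function name, the radius threshold, and the quadratic form of the penalty are all assumptions.

```python
import torch

def soft_connectivity_loss(verts, radius=0.01, weight=1.0):
    """Hypothetical soft connectivity penalty for a triangle soup.

    verts : (T, 3, 3) tensor, T triangles x 3 vertices x xyz, requires_grad=True.
    Pulls together pairs of vertices that belong to *different* triangles and lie
    within `radius` of each other, encouraging soft surface continuity without
    merging the triangles into a fixed mesh.
    """
    T = verts.shape[0]
    flat = verts.reshape(-1, 3)                                   # (3T, 3) all soup vertices
    tri_id = torch.arange(T, device=verts.device).repeat_interleave(3)

    # Pairwise distances between all soup vertices (fine for small T;
    # a real system would use a spatial hash or k-NN instead).
    d = torch.cdist(flat, flat)                                   # (3T, 3T)

    different_tri = tri_id[:, None] != tri_id[None, :]            # exclude same-triangle pairs
    close = (d < radius) & different_tri

    # Quadratic attraction on close cross-triangle pairs (each pair is counted
    # twice, which only rescales the loss).
    return weight * (d[close] ** 2).sum()

# Usage sketch: add the penalty to a photometric objective during optimization.
# `render` and `photometric_loss` are placeholders, not functions from the paper.
# verts = torch.randn(100, 3, 3, requires_grad=True)
# loss = photometric_loss(render(verts, colors), target) \
#        + soft_connectivity_loss(verts, radius=0.02)
# loss.backward()
```

The paper's force formulation is presumably more structured than this all-pairs penalty; the snippet only conveys the general idea of a soft, differentiable continuity term added alongside the photometric objective.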
Related papers
- 2D Triangle Splatting for Direct Differentiable Mesh Training [4.161453036693641]
2D Triangle Splatting (2DTS) is a novel method that replaces 3D Gaussian primitives with 2D triangle facelets. By incorporating a compactness parameter into the triangle primitives, we enable direct training of photorealistic meshes. Our approach produces reconstructed meshes with superior visual quality compared to existing mesh reconstruction methods.
arXiv Detail & Related papers (2025-06-23T12:26:47Z) - Triangle Splatting for Real-Time Radiance Field Rendering [96.8143602720977]
We develop a differentiable renderer that directly optimizes triangles via end-to-end gradients. Compared to popular 2D and 3D Gaussian Splatting methods, our approach achieves higher visual fidelity, faster convergence, and increased rendering throughput. For the Garden scene, we achieve over 2,400 FPS at 1280x720 resolution using an off-the-shelf mesh renderer.
arXiv Detail & Related papers (2025-05-25T14:47:10Z) - BG-Triangle: Bézier Gaussian Triangle for 3D Vectorization and Rendering [60.240908644910874]
Differentiable rendering enables efficient optimization by allowing gradients to be computed through the rendering process. Existing solutions approximate or re-formulate traditional rendering operations using smooth, probabilistic proxies. We present a novel hybrid representation that combines Bézier triangle-based vector graphics primitives with Gaussian-based probabilistic models.
arXiv Detail & Related papers (2025-03-18T06:53:52Z) - Geometry Field Splatting with Gaussian Surfels [23.412129038089326]
We leverage the geometry field proposed in recent work for opaque surfaces, which can then be converted to volume densities. We adapt Gaussian kernels or surfels to the geometry field rather than the volume, enabling precise reconstruction of opaque solids. We demonstrate significant improvement in the quality of reconstructed 3D surfaces on widely-used datasets.
arXiv Detail & Related papers (2024-11-26T03:07:05Z) - 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images. 3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending. Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z) - Oriented-grid Encoder for 3D Implicit Representations [10.02138130221506]
This paper is the first to exploit 3D characteristics in 3D geometric encoders explicitly.
Our method achieves state-of-the-art results compared to prior techniques.
arXiv Detail & Related papers (2024-02-09T19:28:13Z) - A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D
Shape Matching [69.14632473279651]
We present a scalable algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes.
We propose a novel primal heuristic coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers.
arXiv Detail & Related papers (2022-04-27T09:47:47Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D
Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - Differentiable Surface Triangulation [40.13834693745158]
We present a differentiable surface triangulation that enables optimization for any per-vertex or per-face differentiable objective function over the space of underlying surface triangulations.
Our method builds on the result that any 2D triangulation can be achieved by a suitably weighted Delaunay triangulation (a construction sketch for weighted Delaunay triangulations appears after this list).
We extend the algorithm to 3D by decomposing shapes into developable sets and differentiably meshing each set with suitable boundary constraints.
arXiv Detail & Related papers (2021-09-22T12:42:43Z) - Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
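As referenced in the Differentiable Surface Triangulation entry above, weighted Delaunay (regular) triangulations are the classical device that lets a continuous set of per-point weights select among discrete triangulations. Below is a minimal sketch of that construction using the standard paraboloid-lifting trick and SciPy; the function name and the choice of SciPy are assumptions, not code from the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

def weighted_delaunay_2d(points, weights):
    """Regular (weighted Delaunay) triangulation of 2D points.

    Each point (x, y) with weight w is lifted to (x, y, x^2 + y^2 - w); the
    faces of the *lower* convex hull of the lifted set project back to the
    plane as the weighted Delaunay triangles. With all weights equal this
    reduces to the ordinary Delaunay triangulation.
    """
    points = np.asarray(points, dtype=float)            # (N, 2)
    weights = np.asarray(weights, dtype=float)          # (N,)
    lifted = np.column_stack([points, (points ** 2).sum(axis=1) - weights])
    hull = ConvexHull(lifted)
    # Keep faces whose outward normal points downward (negative z-component):
    # these form the lower hull.
    lower = hull.equations[:, 2] < 0
    return hull.simplices[lower]                         # (M, 3) vertex index triples

# Example: with zero weights this is the plain Delaunay triangulation;
# perturbing the weights flips diagonals, which is roughly the degree of
# freedom that paper makes differentiable.
# pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
# tris = weighted_delaunay_2d(pts, np.zeros(len(pts)))
```

The sketch only shows the underlying combinatorial construction; the paper's contribution is making the mapping from weights to triangulations differentiable so that per-vertex or per-face objectives can be optimized.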