Triangle Splatting for Real-Time Radiance Field Rendering
- URL: http://arxiv.org/abs/2505.19175v1
- Date: Sun, 25 May 2025 14:47:10 GMT
- Title: Triangle Splatting for Real-Time Radiance Field Rendering
- Authors: Jan Held, Renaud Vandeghen, Adrien Deliege, Abdullah Hamdi, Silvio Giancola, Anthony Cioppa, Andrea Vedaldi, Bernard Ghanem, Andrea Tagliasacchi, Marc Van Droogenbroeck
- Abstract summary: We develop a differentiable renderer that directly optimizes triangles via end-to-end gradients. Compared to popular 2D and 3D Gaussian Splatting methods, our approach achieves higher visual fidelity, faster convergence, and increased rendering throughput. For the \textit{Garden} scene, we achieve over 2,400 FPS at 1280x720 resolution using an off-the-shelf mesh renderer.
- Score: 96.8143602720977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of computer graphics was revolutionized by models such as Neural Radiance Fields and 3D Gaussian Splatting, displacing triangles as the dominant representation for photogrammetry. In this paper, we argue for a triangle comeback. We develop a differentiable renderer that directly optimizes triangles via end-to-end gradients. We achieve this by rendering each triangle as differentiable splats, combining the efficiency of triangles with the adaptive density of representations based on independent primitives. Compared to popular 2D and 3D Gaussian Splatting methods, our approach achieves higher visual fidelity, faster convergence, and increased rendering throughput. On the Mip-NeRF360 dataset, our method outperforms concurrent non-volumetric primitives in visual fidelity and achieves higher perceptual quality than the state-of-the-art Zip-NeRF on indoor scenes. Triangles are simple, compatible with standard graphics stacks and GPU hardware, and highly efficient: for the \textit{Garden} scene, we achieve over 2,400 FPS at 1280x720 resolution using an off-the-shelf mesh renderer. These results highlight the efficiency and effectiveness of triangle-based representations for high-quality novel view synthesis. Triangles bring us closer to mesh-based optimization by combining classical computer graphics with modern differentiable rendering frameworks. The project page is https://trianglesplatting.github.io/
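The core idea of the abstract, rendering each triangle as a differentiable splat so that image gradients flow back to the vertex positions, can be sketched in a few lines. The following is a minimal illustrative NumPy version, not the paper's formulation: the sigmoid window, the smoothness parameter `sigma`, and the edge-distance normalization are all assumptions made for the sketch.

```python
import numpy as np

def edge_fn(p, a, b):
    # Signed area of (a, b, p); positive when p lies to the left of edge a->b.
    return (b[0] - a[0]) * (p[..., 1] - a[1]) - (b[1] - a[1]) * (p[..., 0] - a[0])

def soft_triangle(verts, h=32, w=32, sigma=1.5):
    """Render one 2D triangle as a soft splat: opacity ~1 inside,
    rolling off smoothly across the edges so it is differentiable
    with respect to the vertex positions."""
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs, ys], axis=-1).astype(float)  # pixel coordinates (x, y)
    a, b, c = verts
    # Signed-distance proxy: minimum of the three edge functions,
    # each normalized by its edge length so units are in pixels.
    d = np.minimum.reduce([
        edge_fn(p, a, b) / np.linalg.norm(b - a),
        edge_fn(p, b, c) / np.linalg.norm(c - b),
        edge_fn(p, c, a) / np.linalg.norm(a - c),
    ])
    # Sigmoid window (an assumption for this sketch): smooth, nonzero
    # gradients near the boundary instead of a hard rasterized edge.
    return 1.0 / (1.0 + np.exp(-d / sigma))

# A counter-clockwise triangle inside a 32x32 image.
verts = np.array([[4.0, 4.0], [28.0, 6.0], [14.0, 28.0]])
alpha = soft_triangle(verts)
```

In a full system, many such per-triangle alpha maps would be composited and the loss differentiated through them; an autodiff framework would replace the hand-written NumPy here.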
Related papers
- 2D Triangle Splatting for Direct Differentiable Mesh Training [4.161453036693641]
2D Triangle Splatting (2DTS) is a novel method that replaces 3D Gaussian primitives with 2D triangle facelets. By incorporating a compactness parameter into the triangle primitives, we enable direct training of photorealistic meshes. Our approach produces reconstructed meshes with superior visual quality compared to existing mesh reconstruction methods.
arXiv Detail & Related papers (2025-06-23T12:26:47Z)
- Radiant Triangle Soup with Soft Connectivity Forces for 3D Reconstruction and Novel View Synthesis [5.688136904090347]
We introduce an inference-time optimization framework utilizing triangles to represent the geometry and appearance of the scene. Compared to the current most widely used primitives for 3D scene representation, namely Gaussian splats, triangles allow for more expressive color. We formulate connectivity forces between triangles during optimization, encouraging explicit, but soft, surface continuity in 3D.
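The "soft connectivity forces" idea above can be sketched as a spring-like penalty over a triangle soup: vertices of different triangles that lie close together are pulled toward each other, encouraging surface continuity without hard mesh topology. This is a hypothetical illustration; the distance threshold `eps` and the quadratic energy are assumptions, not that paper's exact formulation.

```python
import numpy as np

def connectivity_loss(tris, eps=0.1):
    """Quadratic spring energy between nearby vertices of a triangle soup.

    tris: (T, 3, 3) array of T triangles, each with three 3D vertices.
    Vertex pairs closer than `eps` (and not identical) contribute d^2.
    """
    verts = tris.reshape(-1, 3)                   # (3T, 3) vertex soup
    diff = verts[:, None, :] - verts[None, :, :]  # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)          # (3T, 3T) distances
    near = (dist < eps) & (dist > 0)              # neighbors, excluding self
    # Each pair appears twice in the symmetric matrix, hence the 0.5 factor.
    return 0.5 * np.sum(np.where(near, dist ** 2, 0.0))

# Two triangles whose edges almost coincide: the loss pulls them together.
tris = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    [[1.0, 0.01, 0.0], [0.0, 1.01, 0.0], [1.0, 1.0, 0.0]],
])
loss = connectivity_loss(tris)
```

Minimizing this term alongside a photometric loss would nudge nearly-touching triangles into contact while leaving distant ones free, which is the "soft" part of the connectivity.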
arXiv Detail & Related papers (2025-05-29T16:50:28Z)
- GaussRender: Learning 3D Occupancy with Gaussian Rendering [86.89653628311565]
GaussRender is a module that improves 3D occupancy learning by enforcing projective consistency. Our method penalizes 3D configurations that produce inconsistent 2D projections, thereby enforcing a more coherent 3D structure.
arXiv Detail & Related papers (2025-02-07T16:07:51Z)
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images. 3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending. Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings [48.72040500647568]
We present ODGS, a novel rasterization pipeline for omnidirectional images, with geometric interpretation.
The entire pipeline is parallelized, achieving optimization and rendering speeds 100 times faster than NeRF-based methods.
Results show ODGS restores fine details effectively, even when reconstructing large 3D scenes.
arXiv Detail & Related papers (2024-10-28T02:45:13Z)
- GSFusion: Online RGB-D Mapping Where Gaussian Splatting Meets TSDF Fusion [12.964675001994124]
Traditional fusion algorithms preserve the spatial structure of 3D scenes, but they often lack realism in terms of visualization.
GSFusion significantly enhances computational efficiency without sacrificing rendering quality.
arXiv Detail & Related papers (2024-08-22T18:32:50Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real-time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments show that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering [6.142272540492937]
We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
arXiv Detail & Related papers (2024-01-11T16:06:36Z)
- VoGE: A Differentiable Volume Renderer using Gaussian Ellipsoids for Analysis-by-Synthesis [62.47221232706105]
We propose VoGE, which utilizes the Gaussian reconstruction kernels as volumetric primitives.
To efficiently render via VoGE, we propose an approximate closed-form solution for the volume density aggregation and a coarse-to-fine rendering strategy.
VoGE outperforms SoTA when applied to various vision tasks, e.g., object pose estimation, shape/texture fitting, and reasoning.
arXiv Detail & Related papers (2022-05-30T19:52:11Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.