TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes
- URL: http://arxiv.org/abs/2405.20283v2
- Date: Mon, 17 Jun 2024 16:51:15 GMT
- Title: TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes
- Authors: Minghao Guo, Bohan Wang, Kaiming He, Wojciech Matusik
- Abstract summary: TetSphere splatting is an explicit, Lagrangian representation for reconstructing 3D shapes with high-quality geometry.
It deforms multiple initial tetrahedral spheres to accurately reconstruct the 3D shape.
It integrates seamlessly into diverse applications, including single-view 3D reconstruction and image-/text-to-3D content generation.
- Score: 47.47768820192874
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present TetSphere splatting, an explicit, Lagrangian representation for reconstructing 3D shapes with high-quality geometry. In contrast to conventional object reconstruction methods, which predominantly use Eulerian representations, including both neural implicit (e.g., NeRF, NeuS) and explicit representations (e.g., DMTet), and often struggle with high computational demands and suboptimal mesh quality, TetSphere splatting utilizes an underused but highly effective geometric primitive -- tetrahedral meshes. This approach directly yields superior mesh quality without relying on neural networks or post-processing. It deforms multiple initial tetrahedral spheres to accurately reconstruct the 3D shape through a combination of differentiable rendering and geometric energy optimization, resulting in significant computational efficiency. Serving as a robust and versatile geometry representation, TetSphere splatting seamlessly integrates into diverse applications, including single-view 3D reconstruction and image-/text-to-3D content generation. Experimental results demonstrate that TetSphere splatting outperforms existing representations, delivering faster optimization speed, enhanced mesh quality, and reliable preservation of thin structures.
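To make the optimization pattern described in the abstract concrete, the sketch below (PyTorch, not the authors' code) shows the general idea of a Lagrangian tet-mesh optimization: the vertex positions of initial tetrahedral spheres are the free variables, updated by gradient descent on a rendering loss plus a geometric regularization energy. The `rendering_loss_stub`, the edge-Laplacian smoothness term, and the single toy tetrahedron are placeholder assumptions; the paper's actual differentiable renderer and geometric energies may differ.

```python
# Minimal sketch (assumptions noted above): optimize tet-mesh vertices directly.
import torch

def edge_smoothness_energy(verts, edges):
    # Placeholder geometric regularizer: penalize squared edge lengths so the
    # mesh deforms without degenerate stretching.
    d = verts[edges[:, 0]] - verts[edges[:, 1]]
    return (d ** 2).sum(dim=1).mean()

def rendering_loss_stub(verts, target):
    # Stand-in for a differentiable rendering loss comparing the projected mesh
    # against target images; here a dummy centroid term so the sketch runs.
    return ((verts.mean(dim=0) - target) ** 2).sum()

# Toy stand-in for the initial tetrahedral spheres: a single tetrahedron.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]],
                     requires_grad=True)
edges = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
target = torch.tensor([0.5, 0.5, 0.5])  # hypothetical rendering target

opt = torch.optim.Adam([verts], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = rendering_loss_stub(verts, target) + 0.1 * edge_smoothness_energy(verts, edges)
    loss.backward()
    opt.step()
```

In practice the stub loss would be replaced by a differentiable rasterizer and the smoothness term by the paper's geometric energies; the structure of the loop (vertices as parameters, rendering plus geometry terms, gradient steps) is the point of the sketch.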
Related papers
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians from 21 input images in only 11 GB GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- Tetrahedron Splatting for 3D Generation [39.24591650300784]
Tetrahedron Splatting (TeT-Splatting) supports easy convergence during optimization, precise mesh extraction, and real-time rendering simultaneously.
Our representation can be trained without mesh extraction, making the optimization process easier to converge.
arXiv Detail & Related papers (2024-06-03T17:56:36Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- Flexible Isosurface Extraction for Gradient-Based Mesh Optimization [65.76362454554754]
This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field.
We introduce FlexiCubes, an isosurface representation specifically designed for optimizing an unknown mesh with respect to geometric, visual, or even physical objectives.
arXiv Detail & Related papers (2023-08-10T06:40:19Z)
- HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
- Learning Neural Implicit Representations with Surface Signal Parameterizations [14.835882967340968]
We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content with appearance data.
arXiv Detail & Related papers (2022-11-01T15:10:58Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle these limitations for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)