Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
- URL: http://arxiv.org/abs/2111.04276v1
- Date: Mon, 8 Nov 2021 05:29:35 GMT
- Title: Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
- Authors: Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler
- Abstract summary: DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
- Score: 90.26556260531707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce DMTet, a deep 3D conditional generative model that can
synthesize high-resolution 3D shapes using simple user guides such as coarse
voxels. It marries the merits of implicit and explicit 3D representations by
leveraging a novel hybrid 3D representation. Compared to the current implicit
approaches, which are trained to regress the signed distance values, DMTet
directly optimizes for the reconstructed surface, which enables us to
synthesize finer geometric details with fewer artifacts. Unlike deep 3D
generative models that directly generate explicit representations such as
meshes, our model can synthesize shapes with arbitrary topology. The core of
DMTet includes a deformable tetrahedral grid that encodes a discretized signed
distance function and a differentiable marching tetrahedra layer that converts
the implicit signed distance representation to the explicit surface mesh
representation. This combination allows joint optimization of the surface
geometry and topology as well as generation of the hierarchy of subdivisions
using reconstruction and adversarial losses defined explicitly on the surface
mesh. Our approach significantly outperforms existing work on conditional shape
synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal
shapes. Project page: https://nv-tlabs.github.io/DMTet/.
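
The abstract describes two core components: a deformable tetrahedral grid that stores signed distance values at its vertices, and a differentiable marching tetrahedra layer that converts those values into a triangle mesh. As a rough illustration of the per-tetrahedron extraction step (ignoring differentiability, batching, and consistent triangle orientation), the sketch below places surface vertices by linear interpolation along sign-change edges. It is not the authors' implementation; the function name, array shapes, and quad split are assumptions.

```python
# Minimal sketch of one marching tetrahedra step (not the DMTet code).
import numpy as np

# The six edges of a tetrahedron as pairs of local vertex indices.
TET_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def march_one_tet(verts, sdf, iso=0.0):
    """verts: (4, 3) vertex positions; sdf: (4,) signed distance values.
    Returns a list of triangles, each a (3, 3) array of surface points."""
    inside = sdf < iso
    n_inside = int(inside.sum())
    if n_inside == 0 or n_inside == 4:
        return []  # the isosurface does not cross this tetrahedron

    # Place a surface vertex on every edge whose endpoints change sign,
    # by linear interpolation of the signed distance values.
    points = []
    for a, b in TET_EDGES:
        if inside[a] != inside[b]:
            t = (iso - sdf[a]) / (sdf[b] - sdf[a])
            points.append(verts[a] + t * (verts[b] - verts[a]))
    p = np.stack(points)

    if len(points) == 3:   # one vertex separated from the other three
        return [p]
    # two vertices on each side: the crossing is a quad, split into two triangles
    return [p[[0, 1, 2]], p[[1, 3, 2]]]
```

On a full grid this step would run over every tetrahedron, with the resulting triangles welded into a single mesh; per the abstract, DMTet additionally makes the extraction differentiable so that reconstruction and adversarial losses defined on the surface mesh can update both the signed distance values and the positions of the deformable grid vertices.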
Related papers
- DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation [10.250715657201363]
We introduce DreamMesh4D, a novel framework combining mesh representation with a geometric skinning technique to generate high-quality 4D objects from a monocular video.
Our method is compatible with modern graphic pipelines, showcasing its potential in the 3D gaming and film industry.
arXiv Detail & Related papers (2024-10-09T10:41:08Z)
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models, which addresses the process of 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- SurfGen: Adversarial 3D Shape Synthesis with Explicit Surface Discriminators [5.575197901329888]
We present a 3D shape synthesis framework (SurfGen) that directly applies adversarial training to the object surface.
Our approach uses a differentiable spherical projection layer to capture and represent the explicit zero isosurface of an implicit 3D generator as functions defined on the unit sphere.
We evaluate our model on large-scale shape datasets, and demonstrate that the end-to-end trained model is capable of generating high fidelity 3D shapes with diverse topology.
arXiv Detail & Related papers (2022-01-01T04:44:42Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling [41.79675639550555]
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses (a sketch of one such loss follows this list).
Our hybrid architecture produces results superior to those of the two equivalent single-representation networks.
arXiv Detail & Related papers (2020-07-20T17:24:51Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
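
As a small illustration of the kind of explicit-implicit coupling mentioned in the "Coupling Explicit and Implicit Surface Representations" entry above, the sketch below penalizes the implicit field for not vanishing on points sampled from the explicit mesh. The exact loss, function names, and PyTorch formulation are assumptions for illustration, not the paper's implementation.

```python
# Illustrative consistency loss between an explicit mesh and an implicit SDF
# (an assumed formulation, not the paper's exact loss).
import torch

def surface_consistency_loss(sdf_net, verts, faces, n_samples=2048):
    """sdf_net: callable mapping (N, 3) points to (N,) signed distances.
    verts: (V, 3) float tensor; faces: (F, 3) long tensor of vertex indices."""
    # Pick random triangles and random barycentric weights
    # (area weighting and exact uniform sampling omitted for brevity).
    idx = torch.randint(0, faces.shape[0], (n_samples,))
    tri = verts[faces[idx]]                    # (n_samples, 3, 3)
    w = torch.rand(n_samples, 3)
    w = w / w.sum(dim=1, keepdim=True)         # barycentric weights
    pts = (w.unsqueeze(-1) * tri).sum(dim=1)   # points on the mesh surface

    # Points on the explicit surface should lie on the implicit zero level set.
    return sdf_net(pts).abs().mean()
```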
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.