Neural Template: Topology-aware Reconstruction and Disentangled
Generation of 3D Meshes
- URL: http://arxiv.org/abs/2206.04942v1
- Date: Fri, 10 Jun 2022 08:32:57 GMT
- Title: Neural Template: Topology-aware Reconstruction and Disentangled
Generation of 3D Meshes
- Authors: Ka-Hei Hui, Ruihui Li, Jingyu Hu, Chi-Wing Fu
- Abstract summary: This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
- Score: 52.038346313823524
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a novel framework called DTNet for 3D mesh
reconstruction and generation via Disentangled Topology. Beyond previous works,
we learn a topology-aware neural template specific to each input, then deform
the template to reconstruct a detailed mesh while preserving the learned
topology. One key insight is to decouple the complex mesh reconstruction into
two sub-tasks: topology formulation and shape deformation. Thanks to this
decoupling, DTNet implicitly learns a disentangled representation for topology
and shape in the latent space. Hence, it enables novel disentangled controls
that support various shape generation applications, e.g., remixing the
topologies of 3D objects, which is not achievable by previous reconstruction
works. Extensive experimental results demonstrate that our method is able to
produce high-quality meshes, particularly with diverse topologies, as compared
with the state-of-the-art methods.
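To make the disentanglement concrete, below is a minimal PyTorch sketch of the idea: a shared encoder produces separate topology and shape codes, and swapping codes between two objects yields the "topology remix" described in the abstract. The module names, dimensions, and point-cloud input are hypothetical illustrations, not the authors' actual DTNet architecture; per the abstract, the disentanglement in DTNet emerges from decoupling reconstruction into topology formulation and shape deformation, so the two-head encoder here is only a schematic stand-in.
```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Maps a point cloud to separate topology and shape latent codes (illustrative)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        # Shared point-wise feature extractor (PointNet-style, simplified).
        self.backbone = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        # Two heads, one per factor of the disentangled representation.
        self.topology_head = nn.Linear(256, latent_dim)  # would drive template formulation
        self.shape_head = nn.Linear(256, latent_dim)     # would drive template deformation

    def forward(self, points):                  # points: (B, N, 3)
        feat = self.backbone(points)            # (B, N, 256)
        pooled = feat.max(dim=1).values         # permutation-invariant global feature
        return self.topology_head(pooled), self.shape_head(pooled)

encoder = DisentangledEncoder()
pts_a = torch.rand(1, 1024, 3)  # object A
pts_b = torch.rand(1, 1024, 3)  # object B
topo_a, _ = encoder(pts_a)
_, shape_b = encoder(pts_b)
# Topology remix: decode A's topology code with B's shape code.
# The decoder (template formulation + deformation) is omitted here.
remixed = (topo_a, shape_b)
```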
Related papers
- DynoSurf: Neural Deformation-based Temporally Consistent Dynamic Surface Reconstruction [93.18586302123633]
This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without correspondence.
We propose DynoSurf, an unsupervised learning framework integrating a template surface representation with a learnable deformation field.
Experimental results demonstrate the significant superiority of DynoSurf over current state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T08:58:48Z)
- GEM3D: GEnerative Medial Abstractions for 3D Shape Synthesis [25.594334301684903]
We introduce GEM3D -- a new deep, topology-aware generative model of 3D shapes.
A key ingredient of our method is a neural skeleton-based representation that encodes information on both shape topology and geometry.
We demonstrate significantly more faithful surface reconstruction and diverse shape generation results compared to the state-of-the-art.
arXiv Detail & Related papers (2024-02-26T20:00:57Z)
- DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization [7.488962492863031]
We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input.
Our key idea is to regularize the reconstruction, which is severely ill-posed given the significant gaps between the sparse views.
Our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views.
arXiv Detail & Related papers (2023-06-07T18:05:14Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction with a focus on model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks [118.20778308823779]
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct the 3D models utilizing the mesh representation.
Experimental results on the images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)