Learning Delaunay Surface Elements for Mesh Reconstruction
- URL: http://arxiv.org/abs/2012.01203v2
- Date: Thu, 6 May 2021 17:17:14 GMT
- Title: Learning Delaunay Surface Elements for Mesh Reconstruction
- Authors: Marie-Julie Rakotosaona, Paul Guerrero, Noam Aigerman, Niloy Mitra,
Maks Ovsjanikov
- Abstract summary: We present a method for reconstructing triangle meshes from point clouds.
We leverage the properties of 2D Delaunay triangulations to construct a mesh from manifold surface elements.
- Score: 40.13834693745158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for reconstructing triangle meshes from point clouds.
Existing learning-based methods for mesh reconstruction mostly generate
triangles individually, making it hard to create manifold meshes. We leverage
the properties of 2D Delaunay triangulations to construct a mesh from manifold
surface elements. Our method first estimates local geodesic neighborhoods
around each point. We then perform a 2D projection of these neighborhoods using
a learned logarithmic map. A Delaunay triangulation in this 2D domain is
guaranteed to produce a manifold patch, which we call a Delaunay surface
element. We synchronize the local 2D projections of neighboring elements to
maximize the manifoldness of the reconstructed mesh. Our results show that we
achieve better overall manifoldness of our reconstructed meshes than current
methods for reconstructing meshes with arbitrary topology. Our code, data, and
pretrained models can be found online:
https://github.com/mrakotosaon/dse-meshing
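To make the per-point pipeline above concrete, the minimal sketch below builds one Delaunay surface element around a query point. It is an illustration only, not the paper's implementation: a k-nearest-neighbor patch and a PCA tangent-plane projection stand in for the learned geodesic neighborhoods and the learned logarithmic map, and SciPy's Delaunay routine performs the planar triangulation.

```python
# Sketch of one "Delaunay surface element": flatten a local neighborhood to 2D
# and triangulate it there, so the resulting patch is manifold by construction.
# Assumptions: kNN patch + PCA tangent plane replace the learned geodesic
# neighborhood and learned log map used in the paper.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def delaunay_surface_element(points, center_idx, k=20):
    """Triangulate the neighborhood of points[center_idx] in a local 2D frame."""
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points[center_idx], k=k)  # stand-in for the learned geodesic patch
    patch = points[nbr_idx] - points[center_idx]

    # PCA frame: the two dominant directions approximate the tangent plane
    # (the paper instead flattens the patch with a learned logarithmic map).
    _, _, vt = np.linalg.svd(patch, full_matrices=False)
    uv = patch @ vt[:2].T                             # 2D coordinates of the patch

    tri = Delaunay(uv)                                # planar Delaunay triangulation -> manifold patch
    # Keep only the triangles incident to the center point (local index 0),
    # mapped back to global point indices.
    local_tris = tri.simplices[np.any(tri.simplices == 0, axis=1)]
    return nbr_idx[local_tris]

# Example usage on a random point set:
pts = np.random.rand(500, 3)
triangles = delaunay_surface_element(pts, center_idx=0)
```

Merging such per-point patches into a single mesh would still require the synchronization of neighboring 2D projections described in the abstract, which is not shown here.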
Related papers
- CircNet: Meshing 3D Point Clouds with Circumcenter Detection [67.23307214942696]
Reconstructing 3D point clouds into triangle meshes is a key problem in computational geometry and surface reconstruction.
We introduce a deep neural network that detects the circumcenters to achieve point cloud triangulation.
We validate our method on prominent datasets of both watertight and open surfaces.
arXiv Detail & Related papers (2023-01-23T03:32:57Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z)
- Laplacian2Mesh: Laplacian-Based Mesh Understanding [4.808061174740482]
We introduce a novel and flexible convolutional neural network (CNN) model, called Laplacian2Mesh, for 3D triangle meshes.
Mesh pooling is applied to expand the receptive field of the network via a multi-space transformation of the Laplacian.
Experiments on various learning tasks applied to 3D meshes demonstrate the effectiveness and efficiency of Laplacian2Mesh.
arXiv Detail & Related papers (2022-02-01T10:10:13Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
- DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes [28.571946680616765]
We propose a family of deep hierarchical convolutional networks over 3D geometric data.
The first type, geodesic convolutions, defines the kernel weights over mesh surfaces or graphs.
The second type, Euclidean convolutions, is independent of any underlying mesh structure.
arXiv Detail & Related papers (2020-04-02T13:52:00Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.