MergeNet: Explicit Mesh Reconstruction from Sparse Point Clouds via Edge Prediction
- URL: http://arxiv.org/abs/2407.11610v1
- Date: Tue, 16 Jul 2024 11:19:16 GMT
- Title: MergeNet: Explicit Mesh Reconstruction from Sparse Point Clouds via Edge Prediction
- Authors: Weimin Wang, Yingxu Deng, Zezeng Li, Yu Liu, Na Lei
- Abstract summary: Existing implicit methods produce smooth, watertight meshes.
Explicit methods are more efficient, directly forming faces from points.
We propose MEsh Reconstruction via edGE (MergeNet), which converts mesh reconstruction into local connectivity prediction problems.
- Score: 11.280646720745729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel method for reconstructing meshes from sparse point clouds by predicting edge connections. Existing implicit methods usually produce smooth, watertight meshes thanks to isosurface extraction algorithms (e.g., Marching Cubes). However, these methods become memory- and compute-intensive as resolution increases. Explicit methods are more efficient because they form faces directly from points. Nevertheless, the challenge of selecting appropriate faces from enormous candidate sets often leads to undesirable faces and holes. Moreover, the reconstruction performance of both approaches tends to degrade as the point cloud gets sparser. To this end, we propose MEsh Reconstruction via edGE (MergeNet), which converts mesh reconstruction into local connectivity prediction problems. Specifically, MergeNet learns to extract features of candidate edges and regress their distances to the underlying surface. The predicted distances are then used to retain the edges that lie on the surface. Finally, meshes are reconstructed by refining the triangulations formed by these edges. Extensive experiments on synthetic and real-scanned datasets demonstrate the superiority of MergeNet over SoTA explicit methods.
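The pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the authors' code: candidate edges come from k-nearest neighbours, and a toy scoring function (distance of the edge midpoint to the z = 0 plane) stands in for the learned per-edge regressor so that the control flow is runnable.

```python
# Sketch of edge-prediction-based reconstruction: propose candidate edges,
# score each edge's distance to the underlying surface, keep edges whose
# predicted distance falls below a threshold.
import math

def candidate_edges(points, k=2):
    """Connect each point to its k nearest neighbours (undirected)."""
    edges = set()
    for i, p in enumerate(points):
        others = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: math.dist(p, points[j]),
        )
        for j in others[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

def edge_surface_distance(points, edge):
    """Stand-in for the learned regressor: distance of the edge midpoint
    to the z = 0 plane, which plays the role of the underlying surface."""
    return abs((points[edge[0]][2] + points[edge[1]][2]) / 2.0)

def filter_edges(points, edges, tau=0.05):
    """Keep only edges predicted to lie on the surface."""
    return {e for e in edges if edge_surface_distance(points, e) < tau}

# Four points on the z = 0 plane plus one outlier above it: edges among the
# planar points survive, edges to the outlier are filtered out.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.5, 1.0)]
kept = filter_edges(pts, candidate_edges(pts))
```

In the full method, the surviving edges seed triangulations that are then refined into the final mesh; here the threshold `tau` plays the role of the learned distance cutoff.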
Related papers
- Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with
Latent Geometric-Consistent Learning [52.825441454264585]
We propose an arbitrary-scale Point cloud Upsampling framework using a Voxel-based Network (PU-VoxelNet).
Thanks to the completeness and regularity inherited from the voxel representation, voxel-based networks can provide a predefined grid space to approximate the 3D surface.
A density-guided grid resampling method is developed to generate high-fidelity points while effectively avoiding sampling outliers.
arXiv Detail & Related papers (2024-03-08T07:31:14Z) - CircNet: Meshing 3D Point Clouds with Circumcenter Detection [67.23307214942696]
Reconstructing 3D point clouds into triangle meshes is a key problem in computational geometry and surface reconstruction.
We introduce a deep neural network that detects the circumcenters to achieve point cloud triangulation.
We validate our method on prominent datasets of both watertight and open surfaces.
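CircNet's triangulation rests on the duality between a triangle and its circumcenter: the circumcenter is the unique point equidistant from the triangle's three vertices. The network learns to detect these points, but the quantity being detected is the classical one; a minimal 2D computation (standard closed-form formula, not the paper's code) makes it concrete.

```python
# Circumcenter of a 2D triangle as the intersection of perpendicular
# bisectors, via the standard closed-form determinant formula.
import math

def circumcenter(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("degenerate (collinear) triangle")
    ux = ((ax**2 + ay**2) * (by - cy)
          + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx)
          + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

center = circumcenter((0, 0), (2, 0), (0, 2))  # right triangle -> (1.0, 1.0)
```

Because each circumcenter identifies exactly one triangle among its equidistant nearest points, predicting circumcenters is equivalent to predicting the triangulation itself.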
arXiv Detail & Related papers (2023-01-23T03:32:57Z) - Edge Preserving Implicit Surface Representation of Point Clouds [27.632399836710164]
We propose a novel edge-preserving implicit surface reconstruction method, which mainly consists of a differentiable Laplacian regularizer and a dynamic edge sampling strategy.
Compared with the state-of-the-art methods, experimental results show that our method significantly improves the quality of 3D reconstruction results.
arXiv Detail & Related papers (2023-01-12T08:04:47Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z) - Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance [30.863194319818223]
We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic/extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
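The intrinsic/extrinsic surrogate above can be sketched with standard tools (a hypothetical illustration, not the paper's implementation): the extrinsic distance of a candidate edge is its Euclidean length, while the intrinsic distance is approximated by the shortest path through a neighbourhood graph of the samples. An edge whose intrinsic/extrinsic ratio is large shortcuts across the surface and should not be connected.

```python
# Intrinsic vs. extrinsic distance on samples of an open arc: the two arc
# endpoints are close in space (extrinsically) but far apart along the
# curve (intrinsically), so an edge between them would be rejected.
import heapq
import math

def dijkstra(adj, src):
    """Shortest path lengths from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def ratio(points, adj, i, j):
    """Intrinsic (graph geodesic) over extrinsic (Euclidean) distance."""
    return dijkstra(adj, i)[j] / math.dist(points[i], points[j])

# Samples along an open arc of the unit circle; the neighbourhood graph
# connects consecutive samples along the arc.
n = 12
angles = [0.4 + t * (2 * math.pi - 0.8) / (n - 1) for t in range(n)]
points = [(math.cos(a), math.sin(a)) for a in angles]
adj = {i: [] for i in range(n)}
for i in range(n - 1):
    w = math.dist(points[i], points[i + 1])
    adj[i].append((i + 1, w))
    adj[i + 1].append((i, w))

r_surface = ratio(points, adj, 0, 1)      # edge along the surface: ratio 1
r_jump = ratio(points, adj, 0, n - 1)     # edge across the gap: ratio >> 1
```

Thresholding this ratio is one concrete way to turn "does this connection follow the surface?" into a local, per-edge decision.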
arXiv Detail & Related papers (2020-07-17T22:36:00Z) - Point2Mesh: A Self-Prior for Deformable Meshes [83.31236364265403]
We introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud.
The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network.
We show that Point2Mesh converges to a desirable solution, unlike a prescribed smoothness prior, which often becomes trapped in undesirable local minima.
arXiv Detail & Related papers (2020-05-22T10:01:04Z) - Learning Nonparametric Human Mesh Reconstruction from a Single Image
without Ground Truth Meshes [56.27436157101251]
We propose a novel approach to learn human mesh reconstruction without any ground truth meshes.
This is made possible by introducing two new terms into the loss function of a graph convolutional neural network (Graph CNN).
arXiv Detail & Related papers (2020-02-28T20:30:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.