OffsetOPT: Explicit Surface Reconstruction without Normals
- URL: http://arxiv.org/abs/2503.15763v1
- Date: Thu, 20 Mar 2025 00:47:27 GMT
- Title: OffsetOPT: Explicit Surface Reconstruction without Normals
- Authors: Huan Lei
- Abstract summary: We propose OffsetOPT, a method that reconstructs explicit surfaces directly from 3D point clouds. We demonstrate its accuracy on popular benchmarks, including small-scale shapes and large-scale open surfaces.
- Score: 7.297352404640493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural surface reconstruction has been dominated by implicit representations with marching cubes for explicit surface extraction. However, those methods typically require high-quality normals for accurate reconstruction. We propose OffsetOPT, a method that reconstructs explicit surfaces directly from 3D point clouds and eliminates the need for point normals. The approach comprises two stages: first, we train a neural network to predict surface triangles based on local point geometry, given uniformly distributed training point clouds. Next, we apply the frozen network to reconstruct surfaces from unseen point clouds by optimizing a per-point offset to maximize the accuracy of triangle predictions. Compared to state-of-the-art methods, OffsetOPT not only excels at reconstructing overall surfaces but also significantly preserves sharp surface features. We demonstrate its accuracy on popular benchmarks, including small-scale shapes and large-scale open surfaces.
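The second stage described in the abstract — keeping the trained network frozen and optimizing a per-point offset to raise its prediction accuracy — can be sketched as a simple gradient-ascent loop. The paper's triangle-prediction network is not available here, so `toy_score_grad` below is a hypothetical stand-in (it rewards points for lying on the unit sphere); only the shape of the optimization loop mirrors the described procedure.

```python
import numpy as np

def optimize_offsets(points, score_grad, steps=200, lr=0.05):
    """OffsetOPT stage 2 (sketch): the network stays frozen; only a
    per-point offset is optimized to maximize the prediction score."""
    offsets = np.zeros_like(points)
    for _ in range(steps):
        # gradient ascent on the score of the displaced points
        offsets += lr * score_grad(points + offsets)
    return offsets

# Hypothetical stand-in for the frozen network's score gradient:
# the "ideal" configuration here is points lying on the unit sphere.
def toy_score_grad(q):
    r = np.linalg.norm(q, axis=1, keepdims=True)
    return -2.0 * (r - 1.0) * q / np.maximum(r, 1e-9)

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
off = optimize_offsets(pts, toy_score_grad)
radii = np.linalg.norm(pts + off, axis=1)
```

With this toy score the displaced points converge onto the sphere, illustrating how the offsets alone (no retraining) can snap an unseen point cloud into a configuration the frozen network predicts well.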
Related papers
- ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z) - High-quality Surface Reconstruction using Gaussian Surfels [18.51978059665113]
We propose a novel point-based representation, Gaussian surfels, to combine the advantages of the flexible optimization procedure in 3D Gaussian points and the surface alignment property of surfels.
This is achieved by setting the z-scale of 3D Gaussian points to 0, effectively flattening the original 3D ellipsoid into a 2D ellipse.
By treating the local z-axis as the normal direction, it greatly improves optimization stability and surface alignment.
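The flattening step above has a very direct reading: zero the third scale so the Gaussian collapses from an ellipsoid into a disc, then take the surface normal from the third column of the Gaussian's rotation matrix. A minimal sketch (the function name and array layout are illustrative, not the paper's API):

```python
import numpy as np

def surfel_normal(rotation, scale):
    """Flatten a 3D Gaussian into a surfel (sketch): set the z-scale to 0
    so the ellipsoid becomes a 2D ellipse, and read the normal off the
    local z-axis, i.e. the third column of the rotation matrix."""
    scale = scale.copy()
    scale[2] = 0.0            # z-scale -> 0: ellipsoid collapses to a disc
    normal = rotation[:, 2]   # local z-axis serves as the normal direction
    return scale, normal

# 90-degree rotation about x: the local z-axis maps to world -y.
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
s, n = surfel_normal(R, np.array([0.4, 0.2, 0.1]))
```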
arXiv Detail & Related papers (2024-04-27T04:13:39Z) - NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance [48.72360034876566]
We propose a neural implicit surface reconstruction pipeline with guidance from 3D Gaussian Splatting to recover highly detailed surfaces. The advantage of 3D Gaussian Splatting is that it can generate dense point clouds with detailed structure. We introduce a scale regularizer to pull the centers close to the surface by enforcing the 3D Gaussians to be extremely thin.
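One plausible form of such a scale regularizer — penalizing each Gaussian's smallest axis so every Gaussian is driven toward a thin disc — can be sketched as below. This is an illustration of the idea, not NeuSG's exact loss, which may differ in form and weighting.

```python
import numpy as np

def scale_regularizer(scales):
    """Hypothetical thinness loss: penalize the smallest scale of each
    Gaussian (rows = Gaussians, columns = the three axis scales), pushing
    every ellipsoid toward a flat disc that hugs the surface."""
    return float(np.sum(np.min(scales, axis=1) ** 2))

S = np.array([[0.3, 0.2, 0.01],
              [0.5, 0.4, 0.02]])
loss = scale_regularizer(S)
```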
arXiv Detail & Related papers (2023-12-01T07:04:47Z) - PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing [30.28380889862059]
We present a data-driven approach for automatic feature detection and remeshing.
Our algorithm improves over the state of the art by 26% in normals F-score and 42% in perceptual $\text{RMSE}_{\text{v}}$.
arXiv Detail & Related papers (2023-11-30T12:15:45Z) - Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z) - GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
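What "UDF and its gradient estimation" means can be illustrated with a brute-force stand-in: the unsigned distance from a query to a raw point set, and its central-difference gradient. GeoUDF learns both quantities rather than computing them this way; the sketch only shows what is being estimated.

```python
import numpy as np

def udf(q, points):
    """Unsigned distance from query q to a point set — a crude stand-in
    for the learned UDF in GeoUDF."""
    return float(np.min(np.linalg.norm(points - q, axis=1)))

def udf_grad(q, points, eps=1e-4):
    """Central-difference gradient of the UDF. Learned methods predict
    this directly; finite differences just illustrate the target."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[i] = (udf(q + e, points) - udf(q - e, points)) / (2 * eps)
    return g

pts = np.array([[0.0, 0.0, 0.0]])       # a single surface point at the origin
g = udf_grad(np.array([2.0, 0.0, 0.0]), pts)
```

For a single point at the origin the UDF is just the query's norm, so the gradient at (2, 0, 0) points along +x — the direction away from the surface, which is exactly what a marching-cubes-style extractor needs to orient crossings.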
arXiv Detail & Related papers (2022-11-30T06:02:01Z) - NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z) - Point Cloud Upsampling and Normal Estimation using Deep Learning for Robust Surface Reconstruction [2.821829060100186]
We present a novel deep learning architecture for point cloud upsampling.
A noisy, low-density point cloud with corresponding point normals is used to estimate a denser point cloud together with its point normals.
arXiv Detail & Related papers (2021-02-26T10:58:26Z) - DeepFit: 3D Surface Fitting via Neural Network Weighted Least Squares [43.24287146191367]
We propose a surface fitting method for unstructured 3D point clouds.
This method, called DeepFit, incorporates a neural network to learn point-wise weights for weighted least squares surface fitting.
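The classical computation that DeepFit augments — a weighted least-squares fit in which the network supplies the per-point weights — looks like this for the simplest (plane) case. The weights here are hand-set for illustration rather than predicted by a network.

```python
import numpy as np

def weighted_plane_fit(points, weights):
    """Weighted least-squares plane fit: the normal is the eigenvector of
    the weighted neighborhood covariance with the smallest eigenvalue.
    DeepFit's network would supply `weights`; here they are given."""
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    d = points - centroid
    cov = (w[:, None] * d).T @ d            # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-eigenvalue eigenvector

# Points near the z=0 plane, plus one outlier that is down-weighted
# almost to zero -- mimicking what learned weights achieve.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [1.0, 1.0, 0.0],
                [0.5, 0.5, 5.0]])
wts = np.array([1.0, 1.0, 1.0, 1.0, 1e-6])
normal = weighted_plane_fit(pts, wts)
```

With the outlier effectively ignored, the recovered normal aligns with the z-axis; with uniform weights the same fit would tilt toward the outlier, which is precisely the failure mode the learned weights suppress.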
arXiv Detail & Related papers (2020-03-23T09:18:54Z) - PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)