LISR: Learning Linear 3D Implicit Surface Representation Using Compactly Supported Radial Basis Functions
- URL: http://arxiv.org/abs/2402.07301v1
- Date: Sun, 11 Feb 2024 20:42:49 GMT
- Authors: Atharva Pandey, Vishal Yadav, Rajendra Nagar, Santanu Chaudhury
- Abstract summary: Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classical geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a lower Chamfer distance than, and an F-score comparable to, the state-of-the-art approach on the benchmark dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit 3D surface reconstruction of an object from its partial and noisy 3D
point cloud scan is a classical geometry processing and 3D computer vision problem. In
the literature, various 3D shape representations have been developed that differ in
memory efficiency and shape retrieval effectiveness, such as volumetric, parametric,
and implicit surfaces. Radial basis functions provide a memory-efficient
parameterization of the implicit surface. However, we show that training a neural
network using the mean squared error between the ground-truth implicit surface and a
linear basis-based implicit surface does not converge to the global solution. In this
work, we propose compactly supported radial basis functions for a linear representation
of the implicit surface. Owing to its continuous nature, this representation enables us
to generate 3D shapes with arbitrary topologies at any resolution. We then propose a
neural network architecture for learning the linear implicit shape representation of
the 3D surface of an object. We learn linear implicit shapes within a supervised
learning framework, using ground-truth Signed Distance Field (SDF) data for guidance.
Classical strategies face difficulties in finding linear implicit shapes from a given
3D point cloud because basis and query point selection raises numerical issues,
requiring the inversion of a large matrix. The proposed approach achieves a lower
Chamfer distance than, and an F-score comparable to, the state-of-the-art approach on
the benchmark dataset. We also demonstrate the effectiveness of the proposed approach
on the 3D shape completion task. A toy sketch of such a compactly supported RBF
representation follows.
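The following is a minimal sketch of the linear, compactly supported RBF representation the abstract describes, using the Wendland C2 kernel; the kernel choice, center placement, support radius, and random stand-in weights are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: nonzero only for r < 1."""
    r = np.clip(r, 0.0, None)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def implicit_value(query, centers, weights, support):
    """Evaluate f(x) = sum_i w_i * phi(||x - c_i|| / support) at query points.

    query: (M, 3) points, centers: (N, 3), weights: (N,), support: scalar > 0.
    Compact support means each query point only 'sees' nearby centers, which
    keeps the resulting linear system sparse, unlike globally supported RBFs.
    """
    dists = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return wendland_c2(dists / support) @ weights

# Toy usage: centers scattered on a unit sphere; the zero level set of f
# approximates the surface once the weights are fit (e.g., to SDF samples).
rng = np.random.default_rng(0)
centers = rng.normal(size=(64, 3))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
weights = rng.normal(size=64)          # stand-in for learned coefficients
queries = rng.uniform(-1.5, 1.5, size=(10, 3))
print(implicit_value(queries, centers, weights, support=0.8))
```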
Related papers
- Normal-guided Detail-Preserving Neural Implicit Functions for High-Fidelity 3D Surface Reconstruction [6.4279213810512665]
Current methods for learning neural implicit representations from RGB or RGBD images produce 3D surfaces with missing parts and lost detail.
This paper demonstrates that training neural representations with first-order differential properties, i.e., surface normals, leads to highly accurate 3D surface reconstruction; a minimal sketch of such normal supervision follows this entry.
arXiv Detail & Related papers (2024-06-07T11:48:47Z)
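The entry above supervises an implicit network with surface normals. Its exact loss is not given in the summary, so this is only a sketch of one common formulation, where the gradient of a hypothetical, tiny SDF network is pushed toward ground-truth normals via cosine similarity:

```python
import torch

# Hypothetical tiny SDF network; the paper's actual architecture differs.
sdf_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 1),
)

def normal_loss(points, gt_normals):
    """Penalize mismatch between the SDF gradient and ground-truth normals.

    For a signed distance field, grad f(x) at a surface point equals the unit
    surface normal, so supervising the gradient injects first-order (normal)
    information into training.
    """
    points = points.clone().requires_grad_(True)
    sdf = sdf_net(points)
    (grad,) = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    grad = torch.nn.functional.normalize(grad, dim=-1)
    return (1.0 - (grad * gt_normals).sum(dim=-1)).mean()  # cosine mismatch

pts = torch.randn(128, 3)
nrm = torch.nn.functional.normalize(torch.randn(128, 3), dim=-1)
print(normal_loss(pts, nrm))
```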
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions; a generic projection sketch follows this entry.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
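EMAP's actual extraction algorithm is more involved than the summary states; the sketch below only illustrates the generic building block of pulling sample points onto a UDF zero set by stepping against the UDF gradient. The analytic toy UDF (distance to the z-axis) stands in for the learned network:

```python
import numpy as np

def udf_line(points):
    """Toy unsigned distance to the z-axis, standing in for a learned UDF."""
    return np.linalg.norm(points[:, :2], axis=1)

def udf_grad(points, eps=1e-4):
    """Finite-difference UDF gradient (a learned UDF would use autograd)."""
    grads = np.zeros_like(points)
    for k in range(3):
        step = np.zeros(3)
        step[k] = eps
        grads[:, k] = (udf_line(points + step) - udf_line(points - step)) / (2 * eps)
    return grads

def project_to_edge(points, iters=5):
    """Move each point along -grad by the UDF value; the fixed points of this
    iteration lie on the zero set of the UDF, i.e., on the recovered edges."""
    for _ in range(iters):
        g = udf_grad(points)
        g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
        points = points - udf_line(points)[:, None] * g
    return points

pts = np.random.default_rng(1).uniform(-1, 1, size=(5, 3))
print(project_to_edge(pts))  # x and y columns collapse toward 0
```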
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance field-based representations; a minimal entropy sketch follows this entry.
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
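The uncertainty estimator described above computes the entropy of the weight distribution along each ray. The sketch below uses the standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)); the sample densities are toy values chosen to contrast a confident ray with an uncertain one:

```python
import numpy as np

def ray_weights(sigmas, deltas):
    """Standard volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

def ray_entropy(sigmas, deltas, eps=1e-10):
    """Entropy of the normalized weight distribution along one ray: a peaked
    distribution (confident surface location) gives low entropy; a diffuse
    one (uncertain geometry) gives high entropy."""
    w = ray_weights(sigmas, deltas)
    p = w / (w.sum() + eps)
    return -(p * np.log(p + eps)).sum()

deltas = np.full(64, 1.0 / 64)
peaked = np.zeros(64)
peaked[32] = 50.0              # single density spike: confident geometry
diffuse = np.full(64, 0.5)     # density spread along the ray: uncertain
print(ray_entropy(peaked, deltas), ray_entropy(diffuse, deltas))
```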
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation; a toy CSP example follows this entry.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
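A CSP representation maps every query point to its closest point on the surface. The paper learns this map with a network; the sketch below substitutes an analytic map for a sphere to show how CSP also yields an unsigned distance, which, unlike an SDF, remains well defined for open or non-watertight surfaces:

```python
import numpy as np

def csp_sphere(points, radius=1.0):
    """Toy closest-surface-point (CSP) map for a sphere, standing in for the
    learned network: each query maps to its nearest point on the surface."""
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    return radius * points / (norms + 1e-12)

def unsigned_distance(points):
    """CSP induces an unsigned distance for free: ||x - CSP(x)||."""
    return np.linalg.norm(points - csp_sphere(points), axis=1)

pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 0.0]])
print(csp_sphere(pts))         # [[0, 0, 1], [1, 0, 0]]
print(unsigned_distance(pts))  # [1.0, 0.5]
```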
- DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes [43.853000396885626]
We propose a learning-based framework for predicting sharp geometric features in sampled 3D shapes.
By fusing the results of individual patches, we can process large 3D models that existing data-driven methods cannot handle.
arXiv Detail & Related papers (2020-11-30T18:21:00Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a framework based on 3D cylindrical partition and 3D cylindrical convolution, termed Cylinder3D, which exploits the 3D topological relations and structures of driving-scene point clouds; a partition sketch follows this entry.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
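The cylindrical partition is the piece of Cylinder3D that is easy to show in isolation. The sketch below bins points into (rho, theta, z) voxels; the bin counts and range limits are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def cylindrical_partition(points, rho_bins=48, theta_bins=360, z_bins=32,
                          rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Assign each LiDAR point (x, y, z) to a cylindrical voxel (rho, theta, z).

    Cylindrical cells match the radial density falloff of driving-scene LiDAR:
    near-range cells stay small, far-range cells grow, so points are spread
    more evenly across cells than with a uniform cubic grid.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)  # in (-pi, pi]
    i = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    j = ((theta + np.pi) / (2 * np.pi) * theta_bins).astype(int) % theta_bins
    k = np.clip(((z - z_min) / (z_max - z_min) * z_bins).astype(int), 0, z_bins - 1)
    return np.stack([i, j, k], axis=1)

pts = np.random.default_rng(2).uniform(-40, 40, size=(5, 3))
print(cylindrical_partition(pts))
```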
- Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling [41.79675639550555]
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses.
Our hybrid architecture produces results superior to those of the two equivalent single-representation networks; a toy consistency-loss sketch follows this entry.
arXiv Detail & Related papers (2020-07-20T17:24:51Z)
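The summary does not spell out the consistency losses, so the following shows only one plausible instance: points emitted by the explicit branch should lie on the implicit branch's zero level set. The tiny SDF head and the random stand-in surface points are assumptions for illustration:

```python
import torch

# Hypothetical tiny implicit (SDF) head; the paper's networks are larger and
# pair it with an explicit branch that we mimic here with sampled points.
sdf_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

def consistency_loss(explicit_surface_points):
    """One plausible coupling loss: surface points produced by the explicit
    branch should sit on the implicit branch's zero level set, so we penalize
    the magnitude of the predicted SDF at those points."""
    return sdf_net(explicit_surface_points).abs().mean()

surface_pts = torch.randn(256, 3)  # stand-in for explicit-branch outputs
print(consistency_loss(surface_pts))
```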