Unsupervised Occupancy Learning from Sparse Point Cloud
- URL: http://arxiv.org/abs/2404.02759v1
- Date: Wed, 3 Apr 2024 14:05:39 GMT
- Title: Unsupervised Occupancy Learning from Sparse Point Cloud
- Authors: Amine Ouasfi, Adnane Boukhayma
- Abstract summary: Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
In this paper, we propose a method to infer occupancy fields instead of Neural Signed Distance Functions.
We highlight its capacity to improve implicit shape inference with respect to baselines and the state-of-the-art using synthetic and real data.
- Score: 8.732260277121547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry. However, learning SDFs from 3D point clouds in the absence of ground truth supervision remains a very challenging task. In this paper, we propose a method to infer occupancy fields instead of SDFs as they are easier to learn from sparse inputs. We leverage a margin-based uncertainty measure to differentially sample from the decision boundary of the occupancy function and supervise the sampled boundary points using the input point cloud. We further stabilize the optimization process at the early stages of the training by biasing the occupancy function towards minimal entropy fields while maximizing its entropy at the input point cloud. Through extensive experiments and evaluations, we illustrate the efficacy of our proposed method, highlighting its capacity to improve implicit shape inference with respect to baselines and the state-of-the-art using synthetic and real data.
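To make the objective described in the abstract concrete, below is a minimal PyTorch-style sketch of how such losses could be assembled. The network architecture, the hard top-k selection of low-margin queries (a crude stand-in for the differentiable boundary sampling the abstract describes), the nearest-neighbor supervision of the sampled boundary points, and the loss weights are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the losses described in the abstract.
# All names, hyperparameters, and the sampling scheme are assumptions
# for exposition; they do not reproduce the authors' code.
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    """Tiny MLP mapping 3D points to occupancy probabilities."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)  # occupancy in (0, 1)

def binary_entropy(p, eps=1e-6):
    p = p.clamp(eps, 1 - eps)
    return -(p * p.log() + (1 - p) * (1 - p).log())

def training_losses(model, points, queries, k_boundary=512):
    """points: (N, 3) input point cloud; queries: (M, 3) random 3D samples."""
    occ_q = model(queries)

    # Margin-based uncertainty: queries whose occupancy is closest to 0.5
    # are treated as lying near the decision boundary of the field.
    margin = (occ_q - 0.5).abs()
    idx = margin.topk(k_boundary, largest=False).indices
    boundary = queries[idx]

    # Supervise boundary samples with the input cloud: each sampled
    # boundary point should be close to some observed surface point.
    d = torch.cdist(boundary, points)        # (k, N) pairwise distances
    loss_boundary = d.min(dim=1).values.mean()

    # Entropy terms: bias the field towards confident (low-entropy)
    # predictions overall, while keeping it maximally uncertain
    # (occupancy near 0.5) at the input points.
    loss_entropy_field = binary_entropy(occ_q).mean()
    loss_entropy_cloud = -binary_entropy(model(points)).mean()

    return loss_boundary + 0.1 * loss_entropy_field + 0.1 * loss_entropy_cloud
```

In a full training loop, `queries` would typically be resampled every iteration, for example uniformly in a bounding box plus small perturbations of the input points.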
Related papers
- Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry.
arXiv Detail & Related papers (2024-08-27T14:54:33Z)
- Learning Unsigned Distance Fields from Local Shape Functions for 3D Surface Reconstruction [42.840655419509346]
This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs.
We observe that 3D shapes manifest simple patterns within localized areas, prompting us to create a training dataset of point cloud patches.
Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the crucial features for UDF estimation.
arXiv Detail & Related papers (2024-07-01T14:39:03Z)
- Towards Better Gradient Consistency for Neural Signed Distance Functions via Level Set Alignment [50.892158511845466]
We show that gradient consistency in the field, indicated by the parallelism of level sets, is the key factor affecting the inference accuracy.
We propose a level set alignment loss to evaluate the parallelism of level sets, which can be minimized to achieve better gradient consistency (a hedged sketch of one such alignment term appears after this list).
arXiv Detail & Related papers (2023-05-19T11:28:05Z)
- Unsupervised Inference of Signed Distance Functions from Single Sparse Point Clouds without Learning Priors [54.966603013209685]
It is vital to infer signed distance functions (SDFs) from 3D point clouds.
We present a neural network to directly infer SDFs from single sparse point clouds.
arXiv Detail & Related papers (2023-03-25T15:56:50Z)
- GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z)
- Neural Poisson: Indicator Functions for Neural Fields [25.41908065938424]
Implicit neural fields generating signed distance field representations (SDFs) of 3D shapes have shown remarkable progress.
We introduce a new paradigm for neural field representations of 3D scenes.
We show that our approach demonstrates state-of-the-art reconstruction performance on both synthetic and real scanned 3D scene data.
arXiv Detail & Related papers (2022-11-25T17:28:22Z)
- CAP-UDF: Learning Unsigned Distance Functions Progressively from Raw Point Clouds with Consistency-Aware Field Optimization [54.69408516025872]
CAP-UDF is a novel method to learn a consistency-aware UDF from raw point clouds.
We train a neural network to gradually infer the relationship between queries and the approximated surface.
We also introduce a polygonization algorithm to extract surfaces using the gradients of the learned UDF.
arXiv Detail & Related papers (2022-10-06T08:51:08Z)
- 3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies [18.609959464825636]
We present a novel learnable implicit representation called the three-pole signed distance function (3PSDF).
It can represent non-watertight 3D shapes with arbitrary topologies while supporting easy field-to-mesh conversion.
We propose a dedicated learning framework to effectively learn 3PSDF without worrying about the vanishing gradient due to the null labels.
arXiv Detail & Related papers (2022-05-31T07:24:04Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continual learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
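On the level set alignment idea summarized in the gradient-consistency entry above: one plausible way to penalize non-parallel level sets is to compare the field gradient at a query point with the gradient at a first-order projection of that point onto the zero level set. The sketch below is only an illustration under that assumption; it is not the formulation used in the cited paper.

```python
# Hedged sketch of a level set alignment term: encourage the gradient at a
# query point to be parallel to the gradient at its (first-order) projection
# onto the zero level set. Names and details are illustrative assumptions.
import torch
import torch.nn.functional as F

def field_and_grad(f, x):
    """Value and spatial gradient of a scalar field f: (B, 3) -> (B,)."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    (g,) = torch.autograd.grad(y.sum(), x, create_graph=True)
    return y, g

def level_set_alignment_loss(sdf, queries):
    d, g = field_and_grad(sdf, queries)
    # First-order projection of each query onto the zero level set.
    proj = queries - d.unsqueeze(-1) * F.normalize(g, dim=-1)
    _, g_proj = field_and_grad(sdf, proj)
    # Penalize misalignment (non-parallel level sets) via cosine similarity.
    return (1.0 - F.cosine_similarity(g, g_proj, dim=-1)).mean()
```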