Mixing-Denoising Generalizable Occupancy Networks
- URL: http://arxiv.org/abs/2311.12125v1
- Date: Mon, 20 Nov 2023 19:05:57 GMT
- Title: Mixing-Denoising Generalizable Occupancy Networks
- Authors: Amine Ouasfi and Adnane Boukhayma
- Abstract summary: Current state-of-the-art implicit neural shape models rely on the inductive bias of convolutions.
We relax the intrinsic model bias and constrain the hypothesis space instead with an auxiliary regularization related to the reconstruction task.
The resulting model is the first MLP-only, locally conditioned network for implicit shape reconstruction from point clouds.
- Score: 10.316008740970037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current state-of-the-art generalizable implicit neural shape models
rely on the inductive bias of convolutions, it is still not entirely clear how
properties emerging from such biases are compatible with the task of 3D
reconstruction from point clouds. We explore an alternative approach to
generalizability in this context. We relax the intrinsic model bias (i.e. using
MLPs to encode local features as opposed to convolutions) and instead constrain the
hypothesis space with an auxiliary regularization related to the
reconstruction task, i.e. denoising. The resulting model is the first MLP-only,
locally conditioned network for implicit shape reconstruction from point clouds with
fast feed-forward inference. Point-cloud-borne features and denoising offsets
are predicted by a purely MLP-based network in a single forward pass. A
decoder predicts occupancy probabilities for queries anywhere in space by
pooling nearby features from the point cloud order-invariantly, guided by
denoised relative positional encodings. We outperform the state-of-the-art
convolutional method while using half the number of model parameters.
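The decoder described in the abstract can be illustrated with a minimal, hypothetical sketch (this is not the authors' code; the neighbor count `k`, the mean pooling, and the fixed random projection standing in for the learned MLP decoder are all illustrative assumptions):

```python
import numpy as np

def occupancy_query(query, points, feats, k=8):
    """Toy occupancy decoder: pool features of the k nearest input points
    order-invariantly, guided by their relative positions to the query.

    query:  (3,) query location in space
    points: (N, 3) input point cloud
    feats:  (N, F) per-point features (stand-in for MLP-encoded local features)
    """
    # Distances from the query to every input point.
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbors

    # Relative positions act as a (toy) relative positional encoding.
    rel = points[idx] - query                    # (k, 3)
    local = np.concatenate([feats[idx], rel], 1) # (k, F + 3)

    # Order-invariant pooling: the mean does not depend on point ordering.
    pooled = local.mean(axis=0)

    # Fixed random projection + sigmoid stands in for the learned MLP decoder,
    # mapping the pooled feature to an occupancy probability in (0, 1).
    w = np.random.default_rng(0).normal(size=pooled.shape[0])
    return 1.0 / (1.0 + np.exp(-pooled @ w))
```

Because the pooling is a mean over an unordered neighbor set, permuting the input point cloud leaves the predicted occupancy unchanged, which is the order-invariance property the abstract refers to.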
Related papers
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6% in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- Generalizing Neural Human Fitting to Unseen Poses With Articulated SE(3) Equivariance [48.39751410262664]
ArtEq is a part-based SE(3)-equivariant neural architecture for SMPL model estimation from point clouds.
Experimental results show that ArtEq generalizes to poses not seen during training, outperforming state-of-the-art methods by 44% in terms of body reconstruction accuracy.
arXiv Detail & Related papers (2023-04-20T17:58:26Z)
- Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration [74.53755415380171]
Deep point cloud registration methods struggle with partial overlaps and rely on labeled data.
We propose UDPReg, an unsupervised deep probabilistic registration framework for point clouds with partial overlaps.
Our UDPReg achieves competitive performance on the 3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks.
arXiv Detail & Related papers (2023-03-23T14:18:06Z)
- SE(3)-Equivariant Attention Networks for Shape Reconstruction in Function Space [50.14426188851305]
We propose the first SE(3)-equivariant coordinate-based network for learning occupancy fields from point clouds.
In contrast to previous shape reconstruction methods that align the input to a regular grid, we operate directly on the irregular, unoriented point cloud.
We show that our method outperforms previous SO(3)-equivariant methods, as well as non-equivariant methods trained on SO(3)-augmented datasets.
arXiv Detail & Related papers (2022-04-05T17:59:15Z)
- Deep Point Cloud Normal Estimation via Triplet Learning [12.271669779096076]
We propose a novel normal estimation method for point clouds.
It consists of two phases: (a) feature encoding which learns representations of local patches, and (b) normal estimation that takes the learned representation as input and regresses the normal vector.
Our method preserves sharp features and achieves better normal estimation results on CAD-like shapes.
arXiv Detail & Related papers (2021-10-20T11:16:00Z)
- Unsupervised 3D Human Mesh Recovery from Noisy Point Clouds [30.401088478228235]
We present an unsupervised approach to reconstruct human shape and pose from noisy point clouds.
Our network is trained from scratch with no need to warm-up the network with supervised data.
arXiv Detail & Related papers (2021-07-15T18:07:47Z)
- OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration [31.108056345511976]
OMNet is a global feature based iterative network for partial-to-partial point cloud registration.
We learn masks in a coarse-to-fine manner to reject non-overlapping regions, converting partial-to-partial registration into registration of the same shapes.
arXiv Detail & Related papers (2021-03-01T11:59:59Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Point2Mesh: A Self-Prior for Deformable Meshes [83.31236364265403]
We introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud.
The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network.
We show that Point2Mesh converges to a desirable solution, unlike a prescribed smoothness prior, which often becomes trapped in undesirable local minima.
arXiv Detail & Related papers (2020-05-22T10:01:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.