Hyperspherical Embedding for Point Cloud Completion
- URL: http://arxiv.org/abs/2307.05634v1
- Date: Tue, 11 Jul 2023 08:18:37 GMT
- Title: Hyperspherical Embedding for Point Cloud Completion
- Authors: Junming Zhang, Haomeng Zhang, Ram Vasudevan, Matthew Johnson-Roberson
- Abstract summary: This paper proposes a hyperspherical module, which transforms and normalizes embeddings from the encoder to be on a unit hypersphere.
We theoretically analyze the hyperspherical embedding and show that it enables more stable training with a wider range of learning rates and more compact embedding distributions.
Experimental results show consistent improvements in point cloud completion in both single-task and multi-task learning.
- Score: 25.41194214006682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most real-world 3D measurements from depth sensors are incomplete, and to
address this issue, the point cloud completion task aims to predict the complete
shapes of objects from partial observations. Previous works often adopt an
encoder-decoder architecture, where the encoder is trained to extract
embeddings that are used as inputs to generate predictions from the decoder.
However, the learned embeddings have a sparse distribution in the feature
space, which leads to worse generalization during testing. To address this
problem, this paper proposes a hyperspherical module, which transforms and
normalizes embeddings from the encoder to be on a unit hypersphere. With the
proposed module, the magnitude and direction of the output hyperspherical
embedding are decoupled and only the directional information is optimized. We
theoretically analyze the hyperspherical embedding and show that it enables
more stable training with a wider range of learning rates and more compact
embedding distributions. Experimental results show consistent improvements in
point cloud completion in both single-task and multi-task learning, which
demonstrates the effectiveness of the proposed method.
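The core operation the abstract describes — normalizing encoder embeddings onto a unit hypersphere so that magnitude is decoupled from direction and only direction is optimized — can be sketched as a row-wise L2 projection. This is an illustrative sketch, not the authors' implementation; the function name and the epsilon guard are assumptions, and any learned transform applied before normalization is omitted here.

```python
import numpy as np

def hyperspherical_embedding(z: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Project each row of z onto the unit hypersphere (L2 normalization).
    Each embedding's magnitude is discarded; only its direction survives,
    so downstream optimization acts on directional information alone."""
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z / np.maximum(norms, eps)  # eps guards against zero vectors

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 256))                # a batch of encoder embeddings
sphere = hyperspherical_embedding(emb)
print(np.linalg.norm(sphere, axis=-1))         # every row now has unit norm
```

In a completion network, this projection would sit between the encoder and decoder; because all embeddings land on the same compact manifold, their distribution cannot spread arbitrarily in magnitude, which is one intuition behind the stability and compactness claims above.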
Related papers
- A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks [81.2624272756733]
In dense retrieval, deep encoders provide embeddings for both inputs and targets.
We train a small parametric corrector network that adjusts stale cached target embeddings.
Our approach matches state-of-the-art results even when no target embedding updates are made during training.
arXiv Detail & Related papers (2024-09-03T13:29:13Z)
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6% in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- Learning Signed Hyper Surfaces for Oriented Point Cloud Normal Estimation [53.19926259132379]
We propose a novel method called SHS-Net for oriented normal estimation of point clouds by learning signed hyper surfaces.
The signed hyper surfaces are implicitly learned in a high-dimensional feature space where the local and global information is aggregated.
An attention-weighted normal prediction module is proposed as a decoder, which takes the local and global latent codes as input to predict oriented normals.
arXiv Detail & Related papers (2023-05-10T03:40:25Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point cloud upsampling simultaneously.
Experimental results demonstrate that our self-supervised scheme achieves competitive or even better performance than supervised state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Upsampling Autoencoder for Self-Supervised Point Cloud Learning [11.19408173558718]
We propose a self-supervised pretraining model for point cloud learning without human annotations.
The upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our UAE outperforms previous state-of-the-art methods in shape classification, part segmentation and point cloud upsampling tasks.
arXiv Detail & Related papers (2022-03-21T07:20:37Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately sought to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- Point Cloud Pre-training by Mixing and Disentangling [35.18101910728478]
Mixing and Disentangling (MD) is a self-supervised learning approach for point cloud pre-training.
We show that the encoder with our MD pre-training significantly surpasses the encoder trained from scratch and converges quickly.
We hope this self-supervised learning attempt on point clouds can pave the way for reducing deep models' dependence on large-scale labeled data.
arXiv Detail & Related papers (2021-09-01T15:52:18Z)
- Data-driven Cloud Clustering via a Rotationally Invariant Autoencoder [10.660968055962325]
We describe an automated rotation-invariant cloud clustering (RICC) method.
It organizes cloud imagery within large datasets in an unsupervised fashion.
Results suggest that the resultant cloud clusters capture meaningful aspects of cloud physics.
arXiv Detail & Related papers (2021-03-08T16:45:14Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.