GP-PCS: One-shot Feature-Preserving Point Cloud Simplification with Gaussian Processes on Riemannian Manifolds
- URL: http://arxiv.org/abs/2303.15225v4
- Date: Sat, 7 Sep 2024 10:41:35 GMT
- Title: GP-PCS: One-shot Feature-Preserving Point Cloud Simplification with Gaussian Processes on Riemannian Manifolds
- Authors: Stuti Pathak, Thomas M. McDonald, Seppe Sels, Rudi Penne,
- Abstract summary: We propose a novel, one-shot point cloud simplification method.
It preserves both the salient structural features and the overall shape of a point cloud without any prior surface reconstruction step.
We evaluate our method on several benchmark and self-acquired point clouds, compare it to a range of existing methods, and demonstrate its application in the downstream tasks of registration and surface reconstruction.
- Score: 2.8811433060309763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The processing, storage and transmission of large-scale point clouds is an ongoing challenge in the computer vision community which hinders progress in the application of 3D models to real-world settings, such as autonomous driving, virtual reality and remote sensing. We propose a novel, one-shot point cloud simplification method which preserves both the salient structural features and the overall shape of a point cloud without any prior surface reconstruction step. Our method employs Gaussian processes suitable for functions defined on Riemannian manifolds, allowing us to model the surface variation function across any given point cloud. A simplified version of the original cloud is obtained by sequentially selecting points using a greedy sparsification scheme. The selection criterion used for this scheme ensures that the simplified cloud best represents the surface variation of the original point cloud. We evaluate our method on several benchmark and self-acquired point clouds, compare it to a range of existing methods, demonstrate its application in downstream tasks of registration and surface reconstruction, and show that our method is competitive both in terms of empirical performance and computational efficiency. The code is available at \href{https://github.com/stutipathak5/gps-for-point-clouds}{https://github.com/stutipathak5/gps-for-point-clouds}.
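To make the pipeline concrete, below is a minimal sketch of the two ingredients the abstract names: a per-point surface-variation estimate from local PCA, and a greedy sparsification loop that keeps the point at which a GP fit to the already-selected points is most uncertain about the surface-variation function. This is not the authors' implementation: it substitutes a standard RBF kernel on the ambient 3D coordinates for the paper's Riemannian-manifold GP, and the neighbourhood size, length-scale, and variance-times-feature selection rule are illustrative assumptions.

```python
# Sketch of feature-preserving simplification in the spirit of GP-PCS.
# Illustrative assumptions (not the paper's code): RBF kernel on ambient
# coordinates instead of a manifold kernel, k = 16 neighbours, a fixed
# length-scale, and a variance-times-feature greedy selection rule.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.neighbors import NearestNeighbors


def surface_variation(points, k=16):
    """Per-point surface variation lambda_min / (lambda_1 + lambda_2 + lambda_3),
    computed from the covariance of each point's k nearest neighbours."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    sv = np.empty(len(points))
    for i, neigh in enumerate(idx):
        eigvals = np.linalg.eigvalsh(np.cov(points[neigh].T))  # ascending order
        sv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return sv


def rbf_kernel(a, b, lengthscale=0.1):
    return np.exp(-0.5 * cdist(a, b, "sqeuclidean") / lengthscale ** 2)


def greedy_simplify(points, n_keep, lengthscale=0.1, noise=1e-4):
    """Greedily keep the points at which a GP over the surface-variation
    function, conditioned on the points kept so far, is most uncertain."""
    sv = surface_variation(points)
    K = rbf_kernel(points, points, lengthscale)
    selected = [int(np.argmax(sv))]            # seed with the sharpest feature
    for _ in range(n_keep - 1):
        K_ss = K[np.ix_(selected, selected)] + noise * np.eye(len(selected))
        K_xs = K[:, selected]
        # GP posterior variance at every candidate point
        var = np.diag(K) - np.einsum("ij,jk,ik->i", K_xs, np.linalg.inv(K_ss), K_xs)
        var[selected] = -np.inf                # never pick a point twice
        selected.append(int(np.argmax(var * (1.0 + sv))))
    return points[selected]


if __name__ == "__main__":
    cloud = np.random.rand(2000, 3)            # stand-in for a real scan
    print(greedy_simplify(cloud, n_keep=200).shape)   # (200, 3)
```

Recomputing the posterior variance from scratch at every step keeps the sketch short but is not how one would implement this efficiently; incremental Cholesky updates are the usual fix for this kind of greedy GP selection.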
Related papers
- Joint Point Cloud Upsampling and Cleaning with Octree-based CNNs [12.727392181530229]
We present a simple yet efficient method for jointly upsampling and cleaning point clouds.
Our method leverages an off-the-shelf octree-based 3D U-Net (OUNet) with minor modifications, enabling the upsampling and cleaning tasks within a single network.
Our network directly processes each input point cloud as a whole, instead of processing point cloud patches as in previous works, which significantly simplifies the implementation and makes inference at least 47 times faster.
arXiv Detail & Related papers (2024-10-22T13:23:05Z) - INPC: Implicit Neural Point Clouds for Radiance Field Rendering [5.64500060725726]
We introduce a new approach for reconstruction and novel-view synthesis of real-world scenes.
We propose a hybrid scene representation, which implicitly encodes a point cloud in a continuous octree-based probability field and a multi-resolution hash grid.
Our method achieves fast inference at interactive frame rates, and can extract explicit point clouds to further enhance performance.
arXiv Detail & Related papers (2024-03-25T15:26:32Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z) - AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD (Chamfer Distance) on PCN, 0.81 CD on ShapeNet-55, and 0.392 MMD on real-world KITTI; a minimal Chamfer Distance sketch appears after this related-papers list.
arXiv Detail & Related papers (2023-01-11T16:14:12Z) - Upsampling Autoencoder for Self-Supervised Point Cloud Learning [11.19408173558718]
We propose a self-supervised pretraining model for point cloud learning without human annotations.
The upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our upsampling autoencoder (UAE) outperforms previous state-of-the-art methods in shape classification, part segmentation, and point cloud upsampling tasks.
arXiv Detail & Related papers (2022-03-21T07:20:37Z) - PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z) - Representing Point Clouds with Generative Conditional Invertible Flow Networks [15.280751949071016]
We propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution.
Our method leverages generative invertible flow networks both to learn embeddings and to generate point clouds.
Our model offers competitive or superior quantitative results on benchmark datasets.
arXiv Detail & Related papers (2020-10-07T18:30:47Z) - SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification [93.54286830844134]
We propose a method for 3D object completion and classification based on point clouds.
For the decoder stage, we propose regional convolutions, a novel operator aimed at maximizing the global activation entropy.
We evaluate our approach on different 3D tasks such as object completion and classification, achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2020-08-17T14:32:35Z) - GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
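Several of the completion papers above quote Chamfer Distance (CD) numbers (e.g. the AdaPoinTr entry). As a reference point, here is a minimal sketch of the symmetric Chamfer Distance between two point sets. Papers differ on conventions (L1 vs. squared L2 distances, mean vs. sum, scaling factors), so reported values are only comparable within one convention; this version averages squared L2 nearest-neighbour distances in both directions, which is one common choice and not necessarily the one used by every paper listed.

```python
# Minimal symmetric Chamfer Distance between two point sets.
# Convention assumed here: mean of squared L2 nearest-neighbour distances
# in both directions; other papers may use L1 distances or sums instead.
import numpy as np
from scipy.spatial import cKDTree


def chamfer_distance(a, b):
    """a, b: (N, 3) and (M, 3) arrays of 3D points."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbour in b for each point of a
    d_ba, _ = cKDTree(a).query(b)   # nearest neighbour in a for each point of b
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())


if __name__ == "__main__":
    x = np.random.rand(1024, 3)
    y = x + 0.01 * np.random.randn(1024, 3)   # noisy copy of x
    print(chamfer_distance(x, y))
```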