HyperPocket: Generative Point Cloud Completion
- URL: http://arxiv.org/abs/2102.05973v1
- Date: Thu, 11 Feb 2021 12:30:03 GMT
- Title: HyperPocket: Generative Point Cloud Completion
- Authors: Przemysław Spurek, Artur Kasymov, Marcin Mazur, Diana Janik,
Sławomir Tadeja, Łukasz Struski, Jacek Tabor, Tomasz Trzciński
- Abstract summary: We introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations.
We leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts.
Our method offers performance competitive with other state-of-the-art models.
- Score: 19.895219420937938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scanning real-life scenes with modern registration devices typically gives
incomplete point cloud representations, mostly due to the limitations of the
scanning process and 3D occlusions. Therefore, completing such partial
representations remains a fundamental challenge of many computer vision
applications. Most of the existing approaches aim to solve this problem by
learning to reconstruct individual 3D objects in a synthetic setup of an
uncluttered environment, which is far from a real-life scenario. In this work,
we reformulate the problem of point cloud completion into an object
hallucination task. Thus, we introduce a novel autoencoder-based architecture
called HyperPocket that disentangles latent representations and, as a result,
enables the generation of multiple variants of the completed 3D point clouds.
We split point cloud processing into two disjoint data streams and leverage a
hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the
missing object parts. As a result, the generated point clouds are not only
smooth but also plausible and geometrically consistent with the scene. Our
method offers performance competitive with other state-of-the-art models,
and it enables a plethora of novel applications.
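The hypernetwork paradigm mentioned in the abstract can be illustrated with a minimal sketch: a hypernetwork maps the latent code of the observed partial cloud to the weights of a small target network, which then maps sampled seed points to points filling the missing "pocket". This is an illustrative simplification, not the authors' architecture; all layer sizes, the single-linear-layer hypernetwork, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative choices, not from the paper)
LATENT = 16      # latent code of the observed (partial) cloud
HIDDEN = 32      # hidden width of the generated target network
TARGET_IN = 3    # target network maps seed points in R^3 ...
TARGET_OUT = 3   # ... to points of the hallucinated "pocket"

# Parameter count of the target MLP: (in->hidden) + (hidden->out), with biases
N_PARAMS = (TARGET_IN * HIDDEN + HIDDEN) + (HIDDEN * TARGET_OUT + TARGET_OUT)

# The hypernetwork itself: here just one linear map latent -> target parameters
W_hyper = rng.normal(0, 0.1, size=(LATENT, N_PARAMS))

def target_forward(params, seeds):
    """Run the generated target MLP on seed points (n, 3) -> (n, 3)."""
    i = 0
    W1 = params[i:i + TARGET_IN * HIDDEN].reshape(TARGET_IN, HIDDEN)
    i += TARGET_IN * HIDDEN
    b1 = params[i:i + HIDDEN]
    i += HIDDEN
    W2 = params[i:i + HIDDEN * TARGET_OUT].reshape(HIDDEN, TARGET_OUT)
    i += HIDDEN * TARGET_OUT
    b2 = params[i:i + TARGET_OUT]
    h = np.tanh(seeds @ W1 + b1)
    return h @ W2 + b2

# A latent code for the partial cloud (in practice, an encoder output)
z = rng.normal(size=LATENT)
params = z @ W_hyper                # hypernetwork emits target-network weights

seeds = rng.uniform(-1, 1, size=(128, 3))  # seeds for the missing region
completed = target_forward(params, seeds)
print(completed.shape)              # (128, 3): one hallucinated point per seed
```

Because a different latent code yields a different set of target weights, sampling several codes yields multiple completion variants, which is the disentanglement property the abstract describes.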
Related papers
- GPN: Generative Point-based NeRF [0.65268245109828]
We propose using a Generative Point-based NeRF (GPN) to reconstruct and repair a partial point cloud.
The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
arXiv Detail & Related papers (2024-04-12T08:14:17Z)
- 3DMambaComplete: Exploring Structured State Space Model for Point Cloud Completion [19.60626235337542]
3DMambaComplete is a point cloud completion network built on the novel Mamba framework.
It encodes point cloud features using Mamba's selection mechanism and predicts a set of Hyperpoints.
A deformation method transforms the 2D mesh representation of the Hyperpoints into a fine-grained 3D structure for point cloud reconstruction.
arXiv Detail & Related papers (2024-04-10T15:45:03Z)
- Patch-Wise Point Cloud Generation: A Divide-and-Conquer Approach [83.05340155068721]
We devise a new 3D point cloud generation framework using a divide-and-conquer approach.
All patch generators are based on learnable priors, which aim to capture the information of geometry primitives.
Experimental results on a variety of object categories from the most popular point cloud dataset, ShapeNet, show the effectiveness of the proposed patch-wise point cloud generation.
arXiv Detail & Related papers (2023-07-22T11:10:39Z)
- 4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional Networks [29.615723135027096]
We propose a new solution for upscaling and restoration of time-varying 3D video point clouds after they have been compressed.
Our model consists of a specifically designed Graph Convolutional Network (GCN) that combines Dynamic Edge Convolution and Graph Attention Networks.
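The Dynamic Edge Convolution this entry mentions operates on features built over a k-nearest-neighbour graph of the points. The sketch below shows only the edge-feature construction step (each point paired with its neighbours' relative offsets, DGCNN-style); the shared MLP, neighbour-wise max, and the attention weighting from Graph Attention Networks are omitted, and all names and sizes are illustrative.

```python
import numpy as np

def edge_features(points, k=2):
    """Build edge features (x_i, x_j - x_i) over a k-NN graph.

    A minimal sketch of the input to a Dynamic Edge Convolution layer:
    in a full model these features pass through a shared MLP followed
    by a max over neighbours; attention weights are omitted here.
    """
    d2 = np.sum((points[:, None] - points[None, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbours per point
    centers = np.repeat(points[:, None, :], k, axis=1)   # (n, k, 3)
    offsets = points[nbrs] - centers          # relative positions x_j - x_i
    return np.concatenate([centers, offsets], axis=-1)   # (n, k, 6)

pts = np.random.default_rng(1).normal(size=(8, 3))
print(edge_features(pts).shape)  # (8, 2, 6)
```

The "dynamic" part of the layer comes from recomputing the k-NN graph in feature space after each convolution rather than keeping the initial spatial graph.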
arXiv Detail & Related papers (2023-06-01T18:43:16Z)
- Point Cloud Registration of non-rigid objects in sparse 3D Scans with applications in Mixed Reality [0.0]
We study the problem of non-rigid point cloud registration for use cases in the Augmented/Mixed Reality domain.
We focus our attention on a special class of non-rigid deformations that happen in rigid objects with parts that move relative to one another.
We propose an efficient and robust point-cloud registration workflow for such objects and evaluate it on real-world data collected using Microsoft Hololens 2.
arXiv Detail & Related papers (2022-12-07T18:54:32Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
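The Chamfer Distance (CD) loss this entry refers to can be written down in a few lines. The sketch below uses one common convention (squared nearest-neighbour distances, averaged in both directions); papers differ on the exact normalisation, so treat this as an assumption rather than the definition used by any particular method.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer Distance between point sets P (n, 3) and Q (m, 3).

    For each point, find the squared distance to its nearest neighbour in
    the other set, then average in both directions.
    """
    # Pairwise squared distances, shape (n, m)
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(P, P))  # 0.0 — identical sets match exactly
```

Because CD only averages nearest-neighbour distances, it is insensitive to how points are distributed within a matched region, which is one motivation the PDR paper gives for moving beyond a plain CD training objective.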
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have been separately conducted to densify, denoise, and complete inaccurate point cloud.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
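The "gridding" in GRNet maps an irregular point cloud onto a regular 3D grid so that ordinary 3D convolutions can be applied. The sketch below is a deliberately simplified stand-in: each point just increments its enclosing cell, whereas GRNet's actual gridding layer is differentiable and distributes weights to the cell's eight corners. Resolution and names are illustrative.

```python
import numpy as np

def gridding(points, resolution=4):
    """Scatter a point cloud (n, 3) in [-1, 1]^3 onto a regular occupancy grid.

    Simplified: each point increments the count of its enclosing cell.
    """
    # Map coordinates from [-1, 1] to cell indices in [0, resolution - 1]
    idx = np.clip(((points + 1.0) / 2.0 * resolution).astype(int),
                  0, resolution - 1)
    grid = np.zeros((resolution,) * 3)
    for i, j, k in idx:
        grid[i, j, k] += 1
    return grid

pts = np.array([[-1.0, -1.0, -1.0], [0.9, 0.9, 0.9], [0.9, 0.9, 0.9]])
g = gridding(pts)
print(g[0, 0, 0], g[3, 3, 3])  # 1.0 2.0
```

After convolutional processing on the grid, a reverse "gridding" step recovers point coordinates from the grid features, which is where the residual connections in GRNet come in.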
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.