View-Guided Point Cloud Completion
- URL: http://arxiv.org/abs/2104.05666v2
- Date: Tue, 13 Apr 2021 04:43:04 GMT
- Title: View-Guided Point Cloud Completion
- Authors: Xuancheng Zhang, Yutong Feng, Siqi Li, Changqing Zou, Hai Wan, Xibin Zhao, Yandong Guo, Yue Gao
- Abstract summary: ViPC (view-guided point cloud completion) recovers the missing crucial global structure information from an extra single-view image.
Our method achieves significantly superior results over typical existing solutions on a new large-scale dataset.
- Score: 43.139758470826806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a view-guided solution for the task of point cloud
completion. Unlike most existing methods directly inferring the missing points
using shape priors, we address this task by introducing ViPC (view-guided point
cloud completion) that takes the missing crucial global structure information
from an extra single-view image. By leveraging a framework that sequentially
performs effective cross-modality and cross-level fusions, our method achieves
significantly superior results over typical existing solutions on a new
large-scale dataset we collect for the view-guided point cloud completion task.
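The abstract does not include code. As a rough illustration of the cross-modality idea (fusing a global feature from the single-view image with a global feature from the partial point cloud before decoding a complete shape), here is a minimal PyTorch sketch; every module name, dimension, and the coarse decoder are assumptions for illustration, not the authors' ViPC implementation.

```python
# Minimal cross-modality fusion sketch; NOT the authors' ViPC code.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Image branch: tiny CNN producing one global view feature (assumed).
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Point branch: PointNet-style shared MLP followed by max pooling.
        self.pt_encoder = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )
        # Decoder maps the fused feature to a coarse complete cloud.
        self.decoder = nn.Linear(2 * feat_dim, 1024 * 3)

    def forward(self, partial_pts, view_img):
        # partial_pts: (B, N, 3); view_img: (B, 3, H, W)
        img_feat = self.img_encoder(view_img).flatten(1)         # (B, C)
        pt_feat = self.pt_encoder(partial_pts.transpose(1, 2))   # (B, C, N)
        pt_feat = pt_feat.max(dim=2).values                      # (B, C)
        fused = torch.cat([img_feat, pt_feat], dim=1)            # (B, 2C)
        return self.decoder(fused).view(-1, 1024, 3)             # coarse shape

model = CrossModalFusion()
coarse = model(torch.rand(2, 2048, 3), torch.rand(2, 3, 64, 64))
print(coarse.shape)  # torch.Size([2, 1024, 3])
```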
Related papers
- Explicitly Guided Information Interaction Network for Cross-modal Point Cloud Completion [34.102157812175854]
We introduce EGIInet (Explicitly Guided Information Interaction Network), a model for the view-guided point cloud completion task.
EGIInet efficiently combines the information from two modalities by leveraging the geometric nature of the completion task.
We propose a novel explicitly guided information interaction strategy that helps the network identify critical information within images (a sketch follows this entry).
arXiv Detail & Related papers (2024-07-03T08:03:56Z)
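For the EGIInet entry above, one plausible (assumed) reading of the guided interaction is point tokens querying image tokens in a shared feature space. The sketch below is hypothetical throughout: `GuidedInteraction`, the projection dimensions, and the token shapes are not taken from the paper.

```python
# Illustrative modality interaction in a shared token space; assumed design.
import torch
import torch.nn as nn

class GuidedInteraction(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.img_proj = nn.Linear(512, dim)   # image patch tokens -> shared dim
        self.pt_proj = nn.Linear(3, dim)      # raw points -> shared dim
        # Point tokens query the image tokens for structural guidance.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pt_tokens, img_tokens):
        # pt_tokens: (B, N, 3); img_tokens: (B, M, 512)
        q = self.pt_proj(pt_tokens)
        kv = self.img_proj(img_tokens)
        guided, _ = self.attn(q, kv, kv)      # image guides point features
        return self.norm(q + guided)          # residual keeps point identity

out = GuidedInteraction()(torch.rand(2, 128, 3), torch.rand(2, 49, 512))
print(out.shape)  # torch.Size([2, 128, 256])
```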
Point Cloud Completion Guided by Prior Knowledge via Causal Inference [19.935868881427226]
We propose a novel approach to the point cloud completion task called Point-PC.
Point-PC uses a memory network to retrieve shape priors (sketched below) and designs a causal inference model to filter missing shape information.
Experimental results on the ShapeNet-55, PCN, and KITTI datasets demonstrate that Point-PC outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-05-28T16:33:35Z)
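The Point-PC entry above mentions a memory network that retrieves shape priors. A toy version of such a retrieval step, matching a query feature against a key-value memory by cosine similarity, could look like this; `retrieve_priors` and its shapes are illustrative assumptions, not the Point-PC design.

```python
# Toy shape-prior retrieval from a key-value memory; purely illustrative.
import torch
import torch.nn.functional as F

def retrieve_priors(query_feat, mem_keys, mem_shapes, k=3):
    """query_feat: (B, C); mem_keys: (M, C); mem_shapes: (M, N, 3)."""
    sim = F.normalize(query_feat, dim=1) @ F.normalize(mem_keys, dim=1).T  # (B, M)
    topk = sim.topk(k, dim=1).indices                                      # (B, k)
    return mem_shapes[topk]                                                # (B, k, N, 3)

priors = retrieve_priors(torch.rand(2, 256), torch.rand(100, 256),
                         torch.rand(100, 1024, 3))
print(priors.shape)  # torch.Size([2, 3, 1024, 3])
```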
FBNet: Feedback Network for Point Cloud Completion [35.89264923599902]
We propose a novel Feedback Network (FBNet) for point cloud completion, in which present features are efficiently refined by rerouting subsequent fine-grained ones.
The main challenge of building feedback connections is the mismatch between present and subsequent features.
To address this, an elaborately designed point Cross Transformer exploits information from feedback features via a cross-attention strategy (sketched below).
arXiv Detail & Related papers (2022-10-08T09:12:37Z)
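The FBNet entry above describes feedback connections in which present features are refined by later, finer ones through cross attention. Below is a schematic unrolled feedback loop under assumed shapes; `FeedbackRefine` is a stand-in, not the paper's point Cross Transformer.

```python
# Schematic feedback refinement loop; assumed shapes and modules.
import torch
import torch.nn as nn

class FeedbackRefine(nn.Module):
    def __init__(self, dim=128, heads=4, steps=3):
        super().__init__()
        self.steps = steps
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feat):
        # feat: (B, N, C); feedback starts as the present features themselves.
        feedback = feat
        for _ in range(self.steps):
            # Present features query the feedback features.
            refined, _ = self.attn(feat, feedback, feedback)
            feat = feat + refined
            feedback = self.ffn(feat)  # next step's fine-grained feedback
        return feat

out = FeedbackRefine()(torch.rand(2, 256, 128))
print(out.shape)  # torch.Size([2, 256, 128])
```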
Cross-modal Learning for Image-Guided Point Cloud Shape Completion [23.779985842891705]
We show how it is possible to combine the information from the two modalities in a localized latent space.
We also investigate a novel weakly-supervised setting where the auxiliary image provides a supervisory signal (a toy projection loss is sketched below).
Experiments show significant improvements over state-of-the-art supervised methods for both unimodal and multimodal completion.
arXiv Detail & Related papers (2022-09-20T08:37:05Z)
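For the weakly-supervised setting in the entry above, one simple way the auxiliary image could supervise completion is a silhouette-consistency loss: project the predicted points with a toy orthographic camera and penalize points that fall outside the object mask. This is an assumed instantiation; the paper's actual formulation may differ.

```python
# Toy silhouette supervision from an auxiliary view; assumed camera model.
import torch
import torch.nn.functional as F

def silhouette_loss(pred_pts, mask):
    """pred_pts: (B, N, 3) with coords in [-1, 1]; mask: (B, 1, H, W), 1 = object."""
    # Orthographic projection: drop z, use (x, y) as normalized grid coords.
    grid = pred_pts[..., :2].unsqueeze(2)                # (B, N, 1, 2)
    occ = F.grid_sample(mask, grid, align_corners=True)  # (B, 1, N, 1)
    return (1.0 - occ).mean()  # penalize points outside the silhouette

loss = silhouette_loss(torch.rand(2, 1024, 3) * 2 - 1,
                       torch.ones(2, 1, 64, 64))
print(loss.item())  # 0.0 for an all-ones mask
```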
Voxel-based Network for Shape Completion by Leveraging Edge Generation [76.23436070605348]
We develop a voxel-based network for point cloud completion by leveraging edge generation (VE-PCN).
We first embed point clouds into regular voxel grids (a minimal scatter step is sketched below), and then generate complete objects with the help of the hallucinated shape edges.
This decoupled architecture, together with multi-scale grid feature learning, generates more realistic on-surface details.
arXiv Detail & Related papers (2021-08-23T05:10:29Z)
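The first step the VE-PCN entry describes is embedding point clouds into regular voxel grids. A minimal point-to-voxel scatter is sketched below; the grid resolution and binary occupancy are illustrative choices, not the paper's settings.

```python
# Minimal point-to-voxel scatter; illustrative grid size and occupancy.
import torch

def voxelize(points, grid=32):
    """points: (B, N, 3) with coords in [-1, 1] -> occupancy (B, 1, G, G, G)."""
    B, N, _ = points.shape
    # Map coordinates to integer cell indices and flatten to 1D cell ids.
    idx = ((points + 1) / 2 * (grid - 1)).long().clamp(0, grid - 1)  # (B, N, 3)
    flat = idx[..., 0] * grid * grid + idx[..., 1] * grid + idx[..., 2]
    vox = torch.zeros(B, grid ** 3, device=points.device)
    vox.scatter_(1, flat, 1.0)  # mark occupied cells
    return vox.view(B, 1, grid, grid, grid)

vox = voxelize(torch.rand(2, 2048, 3) * 2 - 1)
print(vox.shape)  # torch.Size([2, 1, 32, 32, 32])
```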
SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface (a toy upsample-then-project step is sketched below).
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
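SPU-Net's self-projection optimization is only named in the entry above. As a toy stand-in, the sketch below upsamples by replicating and jittering points and then pulls each generated point toward its nearest input point; the actual method is considerably more involved.

```python
# Toy coarse-to-fine upsample plus a crude "projection" toward the input
# surface; purely illustrative, not SPU-Net itself.
import torch

def upsample_and_project(pts, ratio=4, step=0.5):
    """pts: (B, N, 3) -> (B, N*ratio, 3)."""
    # Coarse stage: replicate each point and jitter to spread the copies.
    coarse = pts.repeat_interleave(ratio, dim=1)
    coarse = coarse + 0.01 * torch.randn_like(coarse)
    # Projection stage: move each generated point toward its nearest input point.
    d = torch.cdist(coarse, pts)                                   # (B, N*r, N)
    nearest = pts.gather(1, d.argmin(2, keepdim=True).expand(-1, -1, 3))
    return coarse + step * (nearest - coarse)

dense = upsample_and_project(torch.rand(2, 512, 3))
print(dense.shape)  # torch.Size([2, 2048, 3])
```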
Cascaded Refinement Network for Point Cloud Completion with Self-supervision [74.80746431691938]
We introduce a two-branch network for shape completion (its layout is sketched below).
The first branch is a cascaded shape completion sub-network to synthesize complete objects.
The second branch is an auto-encoder to reconstruct the original partial input.
arXiv Detail & Related papers (2020-10-17T04:56:22Z)
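The two-branch layout in the entry above (a completion sub-network plus an auto-encoder over the partial input) can be pictured with the schematic below. The shared encoder and the linear decoders are placeholders, not the paper's architecture.

```python
# Schematic two-branch layout: one branch completes the shape, the other
# reconstructs the partial input as a self-supervised signal. Placeholders only.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, dim=256, n_out=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv1d(3, dim, 1), nn.ReLU(),
                                     nn.Conv1d(dim, dim, 1))
        self.complete = nn.Linear(dim, n_out * 3)     # branch 1: full shape
        self.reconstruct = nn.Linear(dim, n_out * 3)  # branch 2: partial input

    def forward(self, partial):
        # partial: (B, N, 3); shared global code via max pooling.
        f = self.encoder(partial.transpose(1, 2)).max(2).values
        full = self.complete(f).view(f.size(0), -1, 3)
        recon = self.reconstruct(f).view(f.size(0), -1, 3)
        return full, recon  # train with completion + reconstruction losses

full, recon = TwoBranchNet()(torch.rand(2, 2048, 3))
print(full.shape, recon.shape)  # both torch.Size([2, 1024, 3])
```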
Point Cloud Completion by Skip-attention Network with Hierarchical Folding [61.59710288271434]
We propose Skip-Attention Network (SA-Net) for 3D point cloud completion.
First, we propose a skip-attention mechanism to effectively exploit the local structure details of incomplete point clouds (sketched below).
Second, in order to fully utilize the selected geometric information encoded by the skip-attention mechanism at different resolutions, we propose a novel structure-preserving decoder.
arXiv Detail & Related papers (2020-05-08T06:23:51Z)
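The skip-attention mechanism in the SA-Net entry above lets decoder features attend to encoder features of the incomplete input. A bare-bones dot-product version is sketched below; `skip_attention` and the feature shapes are assumptions, not the SA-Net definition.

```python
# Bare-bones skip-attention: decoder queries attend to encoder features.
import torch
import torch.nn.functional as F

def skip_attention(dec_feat, enc_feat):
    """dec_feat: (B, M, C) decoder queries; enc_feat: (B, N, C) keys/values."""
    scale = dec_feat.size(-1) ** 0.5
    w = F.softmax(dec_feat @ enc_feat.transpose(1, 2) / scale, dim=-1)  # (B, M, N)
    return dec_feat + w @ enc_feat  # decoder features enriched with local detail

out = skip_attention(torch.rand(2, 1024, 128), torch.rand(2, 256, 128))
print(out.shape)  # torch.Size([2, 1024, 128])
```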
Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
By considering the local details of the partial input together with the global shape information, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, in order to learn the complicated point distribution (a toy version is sketched below).
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
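The patch discriminator in the last entry scores local regions so that each matches the ground-truth pattern. A toy version, sampling random centers, gathering k-nearest-neighbor patches, and scoring each one, is sketched below; it is a schematic stand-in rather than the paper's discriminator.

```python
# Toy patch discriminator: score k-NN patches around random centers so that
# every local region is pushed toward the ground-truth pattern. Schematic only.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, k=32, dim=128):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(nn.Conv1d(3, dim, 1), nn.ReLU(),
                                 nn.Conv1d(dim, dim, 1))
        self.score = nn.Linear(dim, 1)

    def forward(self, pts, n_patches=16):
        # pts: (B, N, 3); pick random patch centers and their k nearest points.
        B, N, _ = pts.shape
        centers = torch.randint(N, (B, n_patches), device=pts.device)
        c = pts.gather(1, centers.unsqueeze(-1).expand(-1, -1, 3))      # (B, P, 3)
        idx = torch.cdist(c, pts).topk(self.k, largest=False).indices  # (B, P, k)
        patches = pts.gather(1, idx.reshape(B, -1, 1).expand(-1, -1, 3))
        patches = patches.reshape(B * n_patches, self.k, 3)
        patches = patches - patches.mean(1, keepdim=True)  # center each patch
        feat = self.net(patches.transpose(1, 2)).max(2).values  # (B*P, dim)
        return self.score(feat).view(B, n_patches)  # one real/fake logit per patch

logits = PatchDiscriminator()(torch.rand(2, 1024, 3))
print(logits.shape)  # torch.Size([2, 16])
```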
This list is automatically generated from the titles and abstracts of the papers on this site.