CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised
Point Cloud Learning
- URL: http://arxiv.org/abs/2201.08215v1
- Date: Thu, 20 Jan 2022 15:04:12 GMT
- Authors: Mingye Xu, Zhipeng Zhou, Hongbin Xu, Yali Wang, and Yu Qiao
- Abstract summary: We propose a generic Contour-Perturbed Reconstruction Network (CP-Net), which can effectively guide self-supervised reconstruction to learn semantic content in the point cloud.
For classification, we obtain results competitive with fully-supervised methods on ModelNet40 (92.5% accuracy) and ScanObjectNN (87.9% accuracy).
- Score: 53.1436669083784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning has not been fully explored for point cloud
analysis. Current frameworks are mainly based on point cloud reconstruction.
Given only 3D coordinates, such approaches tend to learn local geometric
structures and contours while failing to understand high-level semantic
content. Consequently, they achieve unsatisfactory performance in downstream
tasks such as classification and segmentation. To fill this gap, we propose a
generic Contour-Perturbed Reconstruction Network (CP-Net), which can
effectively guide self-supervised reconstruction to learn semantic content in
the point cloud and thus promote the discriminative power of point cloud
representations. First, we introduce a concise contour-perturbed augmentation
module for point cloud reconstruction. Guided by geometry disentangling, we
divide the point cloud into contour and content components. Subsequently, we
perturb the contour components while preserving the content components. As a
result, the self-supervisor can effectively focus on semantic content by
reconstructing the original point cloud from the perturbed one. Second, we use
this perturbed reconstruction as an assistant branch to guide the learning of
the basic reconstruction branch via a distinct dual-branch consistency loss. In
this way, our CP-Net not only captures the structural contour but also learns
the semantic content needed for discriminative downstream tasks. Finally, we
perform extensive experiments on a number of point cloud benchmarks. Part
segmentation results demonstrate that our CP-Net (81.5% mIoU) outperforms
previous self-supervised models and narrows the gap with fully-supervised
methods. For classification, we obtain results competitive with
fully-supervised methods on ModelNet40 (92.5% accuracy) and ScanObjectNN
(87.9% accuracy). The code and models will be released.
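The contour/content split and perturbation step described in the abstract can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's actual geometry-disentangling module: the function names, the neighbourhood-centroid "contour-ness" score, and all parameter values are hypothetical stand-ins for whatever criterion CP-Net uses to separate contour points from content points.

```python
import numpy as np

def knn_indices(points, k):
    # Brute-force k-nearest neighbours: pairwise squared distances,
    # then take the k closest points per row, excluding the point itself.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def contour_scores(points, k=16):
    # Crude "contour-ness" proxy (an assumption, not CP-Net's criterion):
    # distance from each point to its local neighbourhood centroid,
    # which tends to be large on sharp edges and silhouette contours.
    nbrs = points[knn_indices(points, k)]          # (N, k, 3)
    centroids = nbrs.mean(axis=1)                  # (N, 3)
    return np.linalg.norm(points - centroids, axis=1)

def contour_perturb(points, ratio=0.3, sigma=0.02, seed=0):
    # Split the cloud into contour / content by score, then jitter only
    # the contour part while leaving the content part untouched.
    rng = np.random.default_rng(seed)
    scores = contour_scores(points)
    n_contour = int(len(points) * ratio)
    contour_idx = np.argsort(scores)[-n_contour:]
    perturbed = points.copy()
    perturbed[contour_idx] += rng.normal(0.0, sigma, (n_contour, 3))
    return perturbed, contour_idx
```

A self-supervised reconstruction loss would then ask the network to recover the original `points` from `perturbed`, forcing it to rely on the preserved content rather than the (now unreliable) contour geometry.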
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- Point cloud completion on structured feature map with feedback network [28.710494879042002]
We propose FSNet, a feature structuring module that can adaptively aggregate point-wise features into a 2D structured feature map.
A 2D convolutional neural network is adopted to decode feature maps from FSNet into a coarse and complete point cloud.
A point cloud upsampling network is used to generate dense point cloud from the partial input and the coarse intermediate output.
arXiv Detail & Related papers (2022-02-17T10:59:40Z)
- Self-Supervised Feature Learning from Partial Point Clouds via Pose Disentanglement [35.404285596482175]
We propose a novel self-supervised framework to learn informative representations from partial point clouds.
We leverage partial point clouds scanned by LiDAR that contain both content and pose attributes.
Our method not only outperforms existing self-supervised methods, but also shows a better generalizability across synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-09T14:12:50Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- Multi-scale Receptive Fields Graph Attention Network for Point Cloud Classification [35.88116404702807]
The proposed MRFGAT architecture is tested on ModelNet10 and ModelNet40 datasets.
Results show it achieves state-of-the-art performance in shape classification tasks.
arXiv Detail & Related papers (2020-09-28T13:01:28Z)
- TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly Representations [20.318695890515613]
We propose an autoencoder, TearingNet, which tackles the challenging task of representing point clouds using a fixed-length descriptor.
Our TearingNet is characterized by a proposed Tearing network module and a Folding network module interacting with each other iteratively.
Experimentation shows the superiority of our proposal in terms of reconstructing point clouds as well as generating more topology-friendly representations than benchmarks.
arXiv Detail & Related papers (2020-06-17T22:42:43Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.