DCS-Net: Pioneering Leakage-Free Point Cloud Pretraining Framework with
Global Insights
- URL: http://arxiv.org/abs/2402.02088v1
- Date: Sat, 3 Feb 2024 08:58:23 GMT
- Title: DCS-Net: Pioneering Leakage-Free Point Cloud Pretraining Framework with
Global Insights
- Authors: Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z. Li, Laurence T. Yang
- Abstract summary: We introduce a novel solution called the Differentiable Center Sampling Network (DCS-Net)
It tackles the information leakage problem by incorporating both global feature reconstruction and local feature reconstruction as non-trivial proxy tasks.
Experimental results demonstrate that our method enhances the expressive capacity of existing point cloud models.
- Score: 55.051626723729896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Masked autoencoding and generative pretraining have achieved remarkable
success in computer vision and natural language processing, and more recently,
they have been extended to the point cloud domain. Nevertheless, existing point
cloud models suffer from the issue of information leakage due to the
pre-sampling of center points, which leads to trivial proxy tasks for the
models. These approaches primarily focus on local feature reconstruction,
limiting their ability to capture global patterns within point clouds. In this
paper, we argue that the reduced difficulty of pretext tasks hampers the
model's capacity to learn expressive representations. To address these
limitations, we introduce a novel solution called the Differentiable Center
Sampling Network (DCS-Net). It tackles the information leakage problem by
incorporating both global feature reconstruction and local feature
reconstruction as non-trivial proxy tasks, enabling simultaneous learning of
both the global and local patterns within point clouds. Experimental results
demonstrate that our method enhances the expressive capacity of existing point
cloud models and effectively addresses the issue of information leakage.
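The leakage the abstract describes is easiest to see in code. Below is a minimal NumPy sketch of farthest point sampling (FPS), the center pre-sampling step used by typical masked point modeling pipelines; this is an illustrative sketch of the general setup, not code from the paper, and all names are invented for the example.

```python
import numpy as np

def farthest_point_sampling(points, n_centers, seed=0):
    """Classic FPS: iteratively pick the point farthest from all chosen centers."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    dists = np.full(n, np.inf)
    for _ in range(n_centers - 1):
        # Distance from every point to its nearest already-chosen center.
        dists = np.minimum(dists,
                           np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dists)))
    return points[chosen]

# Toy cloud: 256 points on the unit sphere.
pts = np.random.default_rng(1).normal(size=(256, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
centers = farthest_point_sampling(pts, 16)

# In a typical masked point autoencoder, these center coordinates are
# computed *before* masking and passed to the decoder as positional
# embeddings for the masked patches -- so the decoder already knows
# where every masked patch sits in space. That positional hint makes
# local reconstruction easier than intended, which is the information
# leakage DCS-Net's differentiable center sampling aims to remove.
```

Because FPS is non-differentiable, the centers must be fixed before masking; making the sampling step differentiable, as DCS-Net proposes, lets the centers be learned rather than given away.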
Related papers
- PointMoment: Mixed-Moment-based Self-Supervised Representation Learning
for 3D Point Clouds [11.980787751027872]
We propose PointMoment, a novel framework for point cloud self-supervised representation learning.
Our framework does not require any special techniques such as asymmetric network architectures, gradient stopping, etc.
arXiv Detail & Related papers (2023-12-06T08:49:55Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud
Analysis [74.00441177577295]
Point cloud analysis incurs heavy computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion
Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- Shrinking unit: a Graph Convolution-Based Unit for CNN-like 3D Point
Cloud Feature Extractors [0.0]
We argue that a lack of inspiration from the image domain might be the primary cause of the performance gap between 3D point cloud feature extractors and their image counterparts.
We propose a graph convolution-based unit, dubbed Shrinking unit, that can be stacked vertically and horizontally for the design of CNN-like 3D point cloud feature extractors.
arXiv Detail & Related papers (2022-09-26T15:28:31Z)
- Upsampling Autoencoder for Self-Supervised Point Cloud Learning [11.19408173558718]
We propose an Upsampling AutoEncoder (UAE), a self-supervised pretraining model for point cloud learning that requires no human annotations.
Upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our UAE outperforms previous state-of-the-art methods in shape classification, part segmentation and point cloud upsampling tasks.
arXiv Detail & Related papers (2022-03-21T07:20:37Z)
- Self-Supervised Feature Learning from Partial Point Clouds via Pose
Disentanglement [35.404285596482175]
We propose a novel self-supervised framework to learn informative representations from partial point clouds.
We leverage partial point clouds scanned by LiDAR that contain both content and pose attributes.
Our method not only outperforms existing self-supervised methods, but also shows a better generalizability across synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-09T14:12:50Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.