PointPatchMix: Point Cloud Mixing with Patch Scoring
- URL: http://arxiv.org/abs/2303.06678v1
- Date: Sun, 12 Mar 2023 14:49:42 GMT
- Title: PointPatchMix: Point Cloud Mixing with Patch Scoring
- Authors: Yi Wang, Jiaze Wang, Jinpeng Li, Zixu Zhao, Guangyong Chen, Anfeng Liu
and Pheng-Ann Heng
- Abstract summary: We propose PointPatchMix, which mixes point clouds at the patch level and generates content-based targets for mixed point clouds.
Our approach preserves local features at the patch level, while the patch scoring module assigns targets based on the content-based significance score from a pre-trained teacher model.
With Point-MAE as our baseline, our model surpasses previous methods by a significant margin, achieving 86.3% accuracy on ScanObjectNN and 94.1% accuracy on ModelNet40.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is an effective regularization strategy for mitigating
overfitting in deep neural networks, and it plays a crucial role in 3D vision
tasks, where the point cloud data is relatively limited. While mixing-based
augmentation has shown promise for point clouds, previous methods mix point
clouds either on block level or point level, which has constrained their
ability to strike a balance between generating diverse training samples and
preserving the local characteristics of point clouds. Additionally, the varying
importance of each part of a point cloud has not been fully considered, as
not all parts contribute equally to the classification task and some parts
may contain unimportant or redundant information. To overcome these
challenges, we propose PointPatchMix, a novel approach that mixes point clouds
at the patch level and integrates a patch scoring module to generate
content-based targets for mixed point clouds. Our approach preserves local
features at the patch level, while the patch scoring module assigns targets
based on the content-based significance score from a pre-trained teacher model.
We evaluate PointPatchMix on two benchmark datasets, ModelNet40 and
ScanObjectNN, and demonstrate significant improvements over various baselines
in both synthetic and real-world datasets, as well as few-shot settings. With
Point-MAE as our baseline, our model surpasses previous methods by a
significant margin, achieving 86.3% accuracy on ScanObjectNN and 94.1% accuracy
on ModelNet40. Furthermore, our approach shows strong generalization across
multiple architectures and enhances the robustness of the baseline model.
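The patch-level mixing and content-based targets described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes naive contiguous patching (the paper groups points with farthest-point sampling and kNN) and takes per-patch significance scores as given inputs (the paper obtains them from a pre-trained teacher model); the function name and signature are hypothetical.

```python
import numpy as np

def mix_patches(cloud_a, cloud_b, scores_a, scores_b, num_patches=8, rng=None):
    """Patch-level mixing sketch (hypothetical helper, not the paper's code).

    cloud_a, cloud_b : (N, 3) arrays, N divisible by num_patches.
    scores_a, scores_b : (num_patches,) non-negative per-patch significance
        scores, assumed to come from some teacher model.
    Returns the mixed cloud and the soft-label weight for cloud_a's class.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Naive patching: contiguous chunks of points. A real pipeline would
    # build patches with farthest-point sampling + kNN grouping.
    patches_a = np.split(cloud_a, num_patches)
    patches_b = np.split(cloud_b, num_patches)
    take_a = rng.random(num_patches) < 0.5  # per-patch source mask
    mixed = np.concatenate(
        [pa if t else pb for pa, pb, t in zip(patches_a, patches_b, take_a)]
    )
    # Content-based target: weight each source by the total significance
    # of the patches actually kept from it, then normalize.
    w_a = scores_a[take_a].sum()
    w_b = scores_b[~take_a].sum()
    lam = w_a / (w_a + w_b + 1e-12)
    return mixed, lam
```

The returned `lam` would be used as the mixing coefficient in a soft cross-entropy target, so patches a teacher deems more significant contribute more to the label.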
Related papers
- Point Cloud Understanding via Attention-Driven Contrastive Learning [64.65145700121442]
Transformer-based models have advanced point cloud understanding by leveraging self-attention mechanisms.
PointACL is an attention-driven contrastive learning framework designed to address the limitations of these models.
Our method employs an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions.
arXiv Detail & Related papers (2024-11-22T05:41:00Z)
- Enhancing Robustness to Noise Corruption for Point Cloud Recognition via Spatial Sorting and Set-Mixing Aggregation Module [17.588975042641007]
We propose Set-Mixer, a noise-robust aggregation module to mitigate the influence of individual noise points.
Experiments conducted on ModelNet40-C indicate that Set-Mixer significantly enhances the model performance on noisy point clouds.
arXiv Detail & Related papers (2024-07-15T15:21:34Z)
- Point Cloud Mamba: Point Cloud Learning via State Space Model [73.7454734756626]
We show that Mamba-based point cloud methods can outperform previous methods based on Transformers or multi-layer perceptrons (MLPs).
Point Cloud Mamba surpasses the state-of-the-art (SOTA) point-based method PointNeXt and achieves new SOTA performance on the ScanObjectNN, ModelNet40, ShapeNetPart, and S3DIS datasets.
arXiv Detail & Related papers (2024-03-01T18:59:03Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering on the point embedding space.
Our algorithm shows notable improvements on well-known point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives tailored to point clouds, namely centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: it first randomly masks out groups of points, then feeds the remainder to a Transformer-based point cloud encoder.
arXiv Detail & Related papers (2023-05-15T17:14:55Z)
- SageMix: Saliency-Guided Mixup for Point Clouds [14.94694648742664]
We propose SageMix, a saliency-guided Mixup for point clouds to preserve salient local structures.
With PointNet++, our method achieves accuracy gains of 2.6% and 4.0% over standard training on the 3D Warehouse (ModelNet40) and ScanObjectNN datasets, respectively.
arXiv Detail & Related papers (2022-10-13T12:19:58Z)
- Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling [104.82953953453503]
We present Point-BERT, a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds.
Experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers.
arXiv Detail & Related papers (2021-11-29T18:59:03Z)
- PointCutMix: Regularization Strategy for Point Cloud Classification [7.6904253666422395]
We propose a simple and effective augmentation method for the point cloud data, named PointCutMix.
It finds the optimal assignment between two point clouds and generates new training data by replacing the points in one sample with their optimal assigned pairs.
arXiv Detail & Related papers (2021-01-05T11:39:06Z)
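The PointCutMix entry above describes finding an optimal assignment between two point clouds and swapping assigned pairs. A hedged sketch of that idea, using the Hungarian algorithm from SciPy as the assignment solver (the function name and signature here are illustrative, not the paper's code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_cut_mix(cloud_a, cloud_b, ratio=0.5, rng=None):
    """Sketch of PointCutMix-style mixing (hypothetical helper).

    Finds a one-to-one assignment between the points of two (N, 3) clouds
    by minimizing total squared distance, then replaces a random fraction
    of cloud_a's points with their assigned partners from cloud_b.
    Returns the mixed cloud and the mixing ratio lambda, which would serve
    as the soft-label weight for cloud_b's class.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = cloud_a.shape[0]
    # Pairwise squared distances between every point of A and of B.
    cost = ((cloud_a[:, None, :] - cloud_b[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)  # optimal one-to-one matching
    mixed = cloud_a.copy()
    k = int(n * ratio)
    replace = rng.choice(n, size=k, replace=False)  # which points to swap
    mixed[row[replace]] = cloud_b[col[replace]]
    return mixed, k / n
```

The Hungarian solver is O(N^3), so for large clouds the actual method would rely on a faster approximate matcher; the sketch only demonstrates the replace-with-assigned-pairs mechanism.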
This list is automatically generated from the titles and abstracts of the papers in this site.