SageMix: Saliency-Guided Mixup for Point Clouds
- URL: http://arxiv.org/abs/2210.06944v1
- Date: Thu, 13 Oct 2022 12:19:58 GMT
- Title: SageMix: Saliency-Guided Mixup for Point Clouds
- Authors: Sanghyeok Lee, Minkyu Jeon, Injae Kim, Yunyang Xiong, Hyunwoo J. Kim
- Abstract summary: We propose SageMix, a saliency-guided Mixup for point clouds to preserve salient local structures.
With PointNet++, our method achieves accuracy gains of 2.6% and 4.0% over standard training on the 3D Warehouse (MN40) and ScanObjectNN datasets, respectively.
- Score: 14.94694648742664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation is key to improving the generalization ability of deep
learning models. Mixup is a simple and widely-used data augmentation technique
that has proven effective in alleviating the problems of overfitting and data
scarcity. Also, recent studies of saliency-aware Mixup in the image domain show
that preserving discriminative parts is beneficial to improving the
generalization performance. However, these Mixup-based data augmentations are
underexplored in 3D vision, especially in point clouds. In this paper, we
propose SageMix, a saliency-guided Mixup for point clouds to preserve salient
local structures. Specifically, we extract salient regions from two point
clouds and smoothly combine them into one continuous shape. With a simple
sequential sampling by re-weighted saliency scores, SageMix preserves the local
structure of salient regions. Extensive experiments demonstrate that the
proposed method consistently outperforms existing Mixup methods in various
benchmark point cloud datasets. With PointNet++, our method achieves accuracy
gains of 2.6% and 4.0% over standard training on the 3D Warehouse (MN40) and
ScanObjectNN datasets, respectively. In addition to generalization
performance, SageMix improves robustness and uncertainty calibration. Moreover,
when adopting our method to various tasks including part segmentation and
standard 2D image classification, our method achieves competitive performance.
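The idea described in the abstract can be illustrated with a minimal sketch: sample one saliency-weighted "query" point per cloud, then blend the two clouds with per-point weights that decay with distance from each cloud's query, so each cloud's salient region survives and the two shapes merge smoothly. This is a simplified illustration under assumed names (`sagemix_sketch`, `sigma`), not the authors' implementation.

```python
import numpy as np

def sagemix_sketch(pc_a, pc_b, sal_a, sal_b, sigma=0.3, rng=None):
    """Hypothetical simplified sketch of saliency-guided point-cloud Mixup.

    pc_a, pc_b: (N, 3) point clouds; sal_a, sal_b: (N,) saliency scores.
    """
    rng = np.random.default_rng() if rng is None else rng
    # 1. Sample one query point per cloud, weighted by saliency, so each
    #    cloud contributes its most discriminative region to the mix.
    qa = pc_a[rng.choice(len(pc_a), p=sal_a / sal_a.sum())]
    qb = pc_b[rng.choice(len(pc_b), p=sal_b / sal_b.sum())]
    # 2. RBF kernel around each query: points near their own salient
    #    region keep a high weight for their own coordinates.
    ka = np.exp(-np.sum((pc_a - qa) ** 2, axis=1) / sigma)
    kb = np.exp(-np.sum((pc_b - qb) ** 2, axis=1) / sigma)
    w = ka / (ka + kb + 1e-12)                 # per-point weight in [0, 1]
    mixed = w[:, None] * pc_a + (1.0 - w[:, None]) * pc_b  # smooth blend
    lam = float(w.mean())                      # soft-label weight for pc_a's class
    return mixed, lam
```

The per-point weights make the transition between shapes continuous rather than a hard cut, which matches the abstract's "smoothly combine them into one continuous shape".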
Related papers
- MM-Mixing: Multi-Modal Mixing Alignment for 3D Understanding [64.65145700121442]
We introduce MM-Mixing, a multi-modal mixing alignment framework for 3D understanding.
Our proposed two-stage training pipeline combines feature-level and input-level mixing to optimize the 3D encoder.
We demonstrate that MM-Mixing significantly improves baseline performance across various learning scenarios.
arXiv Detail & Related papers (2024-05-28T18:44:15Z)
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework that disentangles feature representation learning and the classifier in an alternating optimization manner to effectively shift the biased decision boundary.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- PointPatchMix: Point Cloud Mixing with Patch Scoring [58.58535918705736]
We propose PointPatchMix, which mixes point clouds at the patch level and generates content-based targets for mixed point clouds.
Our approach preserves local features at the patch level, while the patch scoring module assigns targets based on the content-based significance score from a pre-trained teacher model.
With Point-MAE as our baseline, our model surpasses previous methods by a significant margin, achieving 86.3% accuracy on ScanObjectNN and 94.1% accuracy on ModelNet40.
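The patch-level mixing described above can be sketched as follows: each patch of the mixed cloud is taken from one of the two inputs, and the soft label is weighted by the significance scores of the patches actually kept. The function name and score handling are illustrative assumptions, not the paper's code; the significance scores would come from a pre-trained teacher model.

```python
import numpy as np

def pointpatchmix_sketch(patches_a, patches_b, score_a, score_b, rng=None):
    """Hypothetical patch-level point-cloud mix.

    patches_*: (P, K, 3) clouds split into P patches of K points;
    score_*: (P,) per-patch significance (e.g. from a teacher model).
    """
    rng = np.random.default_rng() if rng is None else rng
    take_a = rng.random(len(patches_a)) < 0.5    # each patch's source cloud
    mixed = np.where(take_a[:, None, None], patches_a, patches_b)
    # Content-based target: weight each class by the total significance
    # of the patches actually taken from that cloud.
    wa = score_a[take_a].sum()
    wb = score_b[~take_a].sum()
    lam = wa / (wa + wb + 1e-12)                 # soft-label weight for cloud A
    return mixed, lam
```

Unlike a fixed Beta-sampled ratio, the label weight here follows the content that ends up in the mixed sample, which is the entry's "content-based targets" idea.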
arXiv Detail & Related papers (2023-03-12T14:49:42Z)
- DoubleMix: Simple Interpolation-Based Data Augmentation for Text Classification [56.817386699291305]
This paper proposes a simple yet effective data augmentation approach termed DoubleMix.
DoubleMix first generates several perturbed samples for each training data.
It then uses the perturbed data and original data to carry out a two-step interpolation in the hidden space of neural models.
arXiv Detail & Related papers (2022-09-12T15:01:04Z)
- CoSMix: Compositional Semantic Mix for Domain Adaptation in 3D LiDAR Segmentation [62.259239847977014]
We propose a new approach of sample mixing for point cloud UDA, namely Compositional Semantic Mix (CoSMix).
CoSMix consists of a two-branch symmetric network that can process labelled synthetic data (source) and real-world unlabelled point clouds (target) concurrently.
We evaluate CoSMix on two large-scale datasets, showing that it outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-07-20T09:33:42Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- Mixture-based Feature Space Learning for Few-shot Image Classification [6.574517227976925]
We propose to model base classes with mixture models by simultaneously training the feature extractor and learning the mixture model parameters in an online manner.
This results in a richer and more discriminative feature space which can be employed to classify novel examples from very few samples.
arXiv Detail & Related papers (2020-11-24T03:16:27Z)
- ROAM: Random Layer Mixup for Semi-Supervised Learning in Medical Imaging [43.26668942258135]
Medical image segmentation is one of the major challenges addressed by machine learning methods.
We propose ROAM, a RandOm lAyer Mixup, which generates data points that the model has never seen before.
ROAM achieves state-of-the-art (SOTA) results in fully supervised (89.5%) and semi-supervised (87.0%) settings with a relative improvement of up to 2.40% and 16.50%, respectively for the whole-brain segmentation.
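The random-layer idea can be sketched in a few lines: instead of mixing only at the input, choose a random depth, forward both samples up to it, and interpolate the hidden representations (and labels) there. The function name, the `layers` list of callables, and the Beta parameter are illustrative assumptions, not ROAM's actual code.

```python
import numpy as np

def random_layer_mixup(x1, x2, y1, y2, layers, alpha=0.4, rng=None):
    """Hedged sketch of random-layer Mixup: `layers` is a list of callables."""
    rng = np.random.default_rng() if rng is None else rng
    k = rng.integers(0, len(layers) + 1)   # depth at which to mix (0 = input)
    for f in layers[:k]:                   # forward both samples up to layer k
        x1, x2 = f(x1), f(x2)
    lam = rng.beta(alpha, alpha)
    h = lam * x1 + (1.0 - lam) * x2        # mix hidden representations
    y = lam * y1 + (1.0 - lam) * y2        # mix labels with the same ratio
    for f in layers[k:]:                   # continue forward with mixed features
        h = f(h)
    return h, y
```

Because the mixing depth varies per batch, the network sees interpolations in many representation spaces, which is what lets the method generate samples "never seen before".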
arXiv Detail & Related papers (2020-03-20T18:07:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.