Learning-Based Biharmonic Augmentation for Point Cloud Classification
- URL: http://arxiv.org/abs/2311.06070v1
- Date: Fri, 10 Nov 2023 14:04:49 GMT
- Title: Learning-Based Biharmonic Augmentation for Point Cloud Classification
- Authors: Jiacheng Wei, Guosheng Lin, Henghui Ding, Jie Hu, Kim-Hui Yap
- Abstract summary: Biharmonic Augmentation (BA) is a novel and efficient data augmentation technique.
BA diversifies point cloud data by imposing smooth non-rigid deformations on existing 3D structures.
We present AdvTune, an advanced online augmentation system that integrates adversarial training.
- Score: 79.13962913099378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point cloud datasets often suffer from inadequate sample sizes in comparison
to image datasets, making data augmentation challenging. Traditional methods, such as rigid transformations and scaling, have limited potential to increase dataset diversity because they cannot alter the shapes of individual samples. We therefore introduce Biharmonic Augmentation (BA), a novel and efficient data augmentation technique that diversifies point cloud data by imposing smooth non-rigid deformations on existing 3D structures. This
approach calculates biharmonic coordinates for the deformation function and
learns diverse deformation prototypes. Utilizing a CoefNet, our method predicts
coefficients to amalgamate these prototypes, ensuring comprehensive
deformation. Moreover, we present AdvTune, an advanced online augmentation
system that integrates adversarial training. This system synergistically
refines the CoefNet and the classification network, facilitating the automated
creation of adaptive shape deformations contingent on the learner status.
Comprehensive experimental analysis validates the superiority of Biharmonic
Augmentation, showcasing notable performance improvements over prevailing point
cloud augmentation techniques across varied network designs.
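The abstract describes the mechanism only at a high level. The sketch below illustrates one plausible way the pieces fit together: precomputed biharmonic coordinates map handle displacements into smooth per-point offsets, and a small CoefNet predicts per-sample coefficients that blend learned deformation prototypes. All names, tensor shapes, and the network design here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the Biharmonic Augmentation idea, under assumed shapes and names.
import torch
import torch.nn as nn

class CoefNet(nn.Module):
    """Predicts per-sample blending coefficients over K deformation prototypes (assumed design)."""
    def __init__(self, num_points: int, num_prototypes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_points * 3, 256), nn.ReLU(),
            nn.Linear(256, num_prototypes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> coefficients: (B, K); softmax keeps the blend bounded
        return torch.softmax(self.mlp(points.flatten(1)), dim=-1)

def biharmonic_augment(points, bi_coords, prototypes, coefnet):
    """
    points:     (B, N, 3)  input point clouds
    bi_coords:  (B, N, H)  precomputed biharmonic coordinates w.r.t. H control handles
    prototypes: (K, H, 3)  learned handle-displacement prototypes
    """
    coefs = coefnet(points)                                             # (B, K)
    handle_disp = torch.einsum("bk,khc->bhc", coefs, prototypes)        # blended handle offsets
    point_disp = torch.einsum("bnh,bhc->bnc", bi_coords, handle_disp)   # smooth per-point offsets
    return points + point_disp
```

For the AdvTune component, one reading of the abstract is an online adversarial loop: the CoefNet is updated to make the blended deformations harder for the current classifier (e.g., by ascending the classification loss) while the classifier is trained on the deformed samples. The exact objectives and schedule are not specified in this summary, so that loop is an assumption as well.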
Related papers
- A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z)
- Robustifying Generalizable Implicit Shape Networks with a Tunable Non-Parametric Model [10.316008740970037]
Generalizable models for implicit shape reconstruction from unoriented point clouds suffer from generalization issues.
We propose here an efficient mechanism to remedy some of these limitations at test time.
We demonstrate the improvement obtained through our method with respect to baselines and the state-of-the-art using synthetic and real data.
arXiv Detail & Related papers (2023-11-21T20:12:29Z)
- Training on Thin Air: Improve Image Classification with Generated Data [28.96941414724037]
Diffusion Inversion is a simple yet effective method to generate diverse, high-quality training data for image classification.
Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion.
We identify three key components that allow our generated images to successfully supplant the original dataset.
arXiv Detail & Related papers (2023-05-24T16:33:02Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55 and 0.392 MMD on real-world KITTI.
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- Joint Data and Feature Augmentation for Self-Supervised Representation Learning on Point Clouds [4.723757543677507]
We propose a fusion contrastive learning framework to combine data augmentations in Euclidean space and feature augmentations in feature space.
We conduct extensive object classification experiments and object part segmentation experiments to validate the transferability of the proposed framework.
Experimental results demonstrate that the proposed framework is effective to learn the point cloud representation in a self-supervised manner.
arXiv Detail & Related papers (2022-11-02T14:58:03Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Graph-Guided Deformation for Point Cloud Completion [35.10606375236494]
We propose a Graph-Guided Deformation Network, which respectively regards the input data and intermediate generation as controlling and supporting points.
Our key insight is to simulate the least-squares Laplacian deformation process via mesh deformation methods, which brings adaptivity for modeling variation in geometric details.
We are the first to refine the point cloud completion task by mimicking traditional graphics algorithms with GCN-guided deformation.
arXiv Detail & Related papers (2021-11-11T12:55:26Z)
- Point Cloud Augmentation with Weighted Local Transformations [14.644850090688406]
We propose a simple and effective augmentation method called PointWOLF for point cloud augmentation.
The proposed method produces smoothly varying non-rigid deformations by locally weighted transformations centered at multiple anchor points.
In addition, AugTune generates augmented samples of desired difficulty by producing targeted confidence scores.
arXiv Detail & Related papers (2021-10-11T16:11:26Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set size and the model size significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)