Point-Voxel Adaptive Feature Abstraction for Robust Point Cloud
Classification
- URL: http://arxiv.org/abs/2210.15514v2
- Date: Sun, 30 Oct 2022 03:43:05 GMT
- Title: Point-Voxel Adaptive Feature Abstraction for Robust Point Cloud
Classification
- Authors: Lifa Zhu, Changwei Lin, Chen Zheng, Ninghua Yang
- Abstract summary: We propose Point-Voxel based Adaptive (PV-Ada) for robust point cloud classification under various corruptions.
Experiments on ModelNet-C dataset demonstrate that PV-Ada outperforms the state-of-the-art methods.
- Score: 6.40412293456886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Great progress has been made in point cloud classification with
learning-based methods. However, complex scenes and sensor inaccuracies in
real-world applications make point cloud data suffer from corruptions such as
occlusion, noise and outliers. In this work, we propose Point-Voxel based
Adaptive (PV-Ada) feature abstraction for robust point cloud classification
under various corruptions. Specifically, the proposed framework iteratively
voxelizes the point cloud and extracts point-voxel features with shared local
encoding and a Transformer. Then, adaptive max-pooling is proposed to robustly
aggregate the point cloud features for classification. Experiments on ModelNet-C
dataset demonstrate that PV-Ada outperforms the state-of-the-art methods. In
particular, we rank the $2^{nd}$ place in ModelNet-C classification track of
PointCloud-C Challenge 2022, with Overall Accuracy (OA) being 0.865. Code will
be available at https://github.com/zhulf0804/PV-Ada.
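The pipeline described in the abstract (voxelize the point cloud, extract per-point features, then aggregate them robustly) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature extractor is omitted, and the "adaptive" pooling is approximated here by scaling each point's feature with a per-point confidence score before a max, so that low-confidence (e.g. corrupted) points contribute less than in a plain max-pool. The voxel size, feature width, and scoring are all illustrative assumptions.

```python
import numpy as np

def voxelize(points, voxel_size=0.25):
    """Assign each point to a voxel by quantizing its coordinates."""
    return np.floor(points / voxel_size).astype(np.int64)

def adaptive_max_pool(features, scores):
    """Aggregate per-point features (N, C) into one global vector (C,).

    Each feature is scaled by a per-point confidence score before the
    max, so corrupted points with low scores are suppressed; a plain
    max-pool is the special case where all scores equal 1.
    """
    weighted = features * scores[:, None]  # (N, C) scaled by (N, 1)
    return weighted.max(axis=0)            # (C,)

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(128, 3))   # toy point cloud
feats = rng.normal(size=(128, 32))            # stand-in per-point features
conf = rng.uniform(0.0, 1.0, size=128)        # stand-in confidence scores

vox = voxelize(pts)
global_feat = adaptive_max_pool(feats, conf)
print(vox.shape, global_feat.shape)  # (128, 3) (32,)
```

In the actual method the per-point features would come from the shared local encoding and Transformer, and the pooling weights would be learned end-to-end rather than sampled.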
Related papers
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z) - Trainable Pointwise Decoder Module for Point Cloud Segmentation [12.233802912441476]
Point cloud segmentation (PCS) aims to make per-point predictions and enables robots and autonomous driving cars to understand the environment.
We propose a trainable pointwise decoder module (PDM) as the post-processing approach.
We also introduce a virtual range image-guided copy-rotate-paste strategy in data augmentation.
arXiv Detail & Related papers (2024-08-02T19:29:35Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint, for underexplored unsupervised class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z) - Variational Relational Point Completion Network for Robust 3D
Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z) - Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning
for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z) - A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
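The Chamfer Distance (CD) loss mentioned above is the standard training objective that PDR moves away from. For reference, a minimal NumPy version of the symmetric CD between two point sets is sketched below; production implementations use batched, GPU-accelerated nearest-neighbor search rather than a dense pairwise distance matrix.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    For each point in p, take the squared distance to its nearest
    neighbor in q, and vice versa; average both directions and sum.
    """
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # 0.0 for identical point sets
```

Because CD only matches nearest neighbors independently in each direction, it can reward blurry, averaged shapes, which is one motivation for alternatives such as the diffusion-refinement paradigm above.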
arXiv Detail & Related papers (2021-12-07T06:59:06Z) - PointCutMix: Regularization Strategy for Point Cloud Classification [7.6904253666422395]
We propose a simple and effective augmentation method for the point cloud data, named PointCutMix.
It finds the optimal assignment between two point clouds and generates new training data by replacing the points in one sample with their optimal assigned pairs.
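The PointCutMix idea described above (find an optimal assignment between two clouds, then swap a subset of matched points) can be sketched with SciPy's Hungarian solver. This is an illustrative approximation: the paper's matching criterion, replacement ratio, and sampling strategy are assumptions here, and the dense cost matrix limits this sketch to small clouds.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pointcutmix(pc_a, pc_b, ratio=0.5, rng=None):
    """Mix two same-sized point clouds (N, 3) for data augmentation.

    Points are matched one-to-one by minimizing total squared distance
    (an EMD-style assignment), then a `ratio` fraction of points in
    pc_a is replaced by their matched counterparts from pc_b.
    """
    if rng is None:
        rng = np.random.default_rng()
    cost = ((pc_a[:, None, :] - pc_b[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # optimal 1-to-1 matching
    n_replace = int(len(pc_a) * ratio)
    idx = rng.choice(len(rows), size=n_replace, replace=False)
    mixed = pc_a.copy()
    mixed[rows[idx]] = pc_b[cols[idx]]         # swap matched points
    return mixed

rng = np.random.default_rng(1)
a = rng.normal(size=(16, 3))
b = rng.normal(size=(16, 3))
m = pointcutmix(a, b, ratio=0.5, rng=np.random.default_rng(2))
print(m.shape)  # (16, 3)
```

With `ratio=1.0` every point is replaced, so the output is a permutation of `pc_b`; with `ratio=0.0` the input is returned unchanged.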
arXiv Detail & Related papers (2021-01-05T11:39:06Z) - Multi-scale Receptive Fields Graph Attention Network for Point Cloud
Classification [35.88116404702807]
The proposed MRFGAT architecture is tested on ModelNet10 and ModelNet40 datasets.
Results show it achieves state-of-the-art performance in shape classification tasks.
arXiv Detail & Related papers (2020-09-28T13:01:28Z) - SoftPoolNet: Shape Descriptor for Point Cloud Completion and
Classification [93.54286830844134]
We propose a method for 3D object completion and classification based on point clouds.
For the decoder stage, we propose regional convolutions, a novel operator aimed at maximizing the global activation entropy.
We evaluate our approach on different 3D tasks such as object completion and classification, achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2020-08-17T14:32:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences arising from their use.