Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis
- URL: http://arxiv.org/abs/2303.00703v1
- Date: Wed, 1 Mar 2023 17:57:09 GMT
- Title: Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis
- Authors: Renrui Zhang, Liuhui Wang, Ziyu Guo, Jianbo Shi
- Abstract summary: We present an alternative that enhances existing deep neural networks without redesigning them or adding extra parameters, termed the Spatial-Neighbor Adapter (SN-Adapter).
Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as spatial knowledge.
For a test point cloud, the SN-Adapter retrieves k nearest neighbors (k-NN) from the pre-constructed spatial prototypes and linearly interpolates the k-NN prediction with that of the original 3D network.
- Score: 14.844183458784235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performances on standard 3D point cloud benchmarks have plateaued, resulting
in oversized models and complex network design to make a fractional
improvement. We present an alternative that enhances existing deep neural networks
without any redesign or extra parameters, termed the Spatial-Neighbor Adapter
(SN-Adapter). Building on any trained 3D network, we utilize its learned
encoding capability to extract features of the training dataset and summarize
them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter
retrieves k nearest neighbors (k-NN) from the pre-constructed spatial
prototypes and linearly interpolates the k-NN prediction with that of the
original 3D network. By providing complementary characteristics, the proposed
SN-Adapter serves as a plug-and-play module to economically improve performance
in a non-parametric manner. More importantly, our SN-Adapter can be effectively
generalized to various 3D tasks, including shape classification, part
segmentation, and 3D object detection, demonstrating its superiority and
robustness. We hope our approach can offer a new perspective on point cloud
analysis and facilitate future research.
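The abstract describes the SN-Adapter as a feature memory plus k-NN retrieval, followed by a linear blend of two predictions. A minimal sketch of that inference path is given below in NumPy; it is an illustration based only on the abstract, not the authors' code, and the function names (build_prototypes, sn_adapter_predict), the cosine-similarity k-NN vote, and the blend weight alpha are all assumptions.

import numpy as np

def build_prototypes(train_features, train_labels):
    # Run the trained 3D network once over the training set (offline), then store
    # its L2-normalized features and labels as the spatial prototypes (assumed normalization).
    feats = train_features / np.linalg.norm(train_features, axis=1, keepdims=True)
    return feats, np.asarray(train_labels)

def sn_adapter_predict(test_feature, net_logits, proto_feats, proto_labels,
                       num_classes, k=16, alpha=0.5):
    # Blend the network's own prediction with a k-NN vote over the stored prototypes.
    q = test_feature / np.linalg.norm(test_feature)
    sims = proto_feats @ q                      # cosine similarity to every prototype
    nn_idx = np.argsort(-sims)[:k]              # indices of the k nearest neighbors
    knn_pred = np.zeros(num_classes)
    for i in nn_idx:
        knn_pred[proto_labels[i]] += max(sims[i], 0.0)  # similarity-weighted vote (assumed)
    knn_pred /= knn_pred.sum() + 1e-8
    net_pred = np.exp(net_logits - net_logits.max())
    net_pred /= net_pred.sum()                  # softmax of the original 3D network
    return alpha * knn_pred + (1.0 - alpha) * net_pred  # linear interpolation of the two

Because the blend requires only one feature pass over the training set and a k-NN lookup at test time, no retraining or extra learnable parameters are involved, which is what makes the adapter plug-and-play and non-parametric.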
Related papers
- PointeNet: A Lightweight Framework for Effective and Efficient Point
Cloud Analysis [28.54939134635978]
PointeNet is a network designed specifically for point cloud analysis.
Our method demonstrates flexibility by seamlessly integrating with a classification/segmentation head or embedding into off-the-shelf 3D object detection networks.
Experiments on object-level datasets, including ModelNet40, ScanObjectNN, and ShapeNet, and on the scene-level dataset KITTI, demonstrate the superior performance of PointeNet over state-of-the-art methods in point cloud analysis.
arXiv Detail & Related papers (2023-12-20T03:34:48Z) - Parameter is Not All You Need: Starting from Non-Parametric Networks for
3D Point Cloud Analysis [51.0695452455959]
We present a Non-parametric Network for 3D point cloud analysis, Point-NN, which consists of purely non-learnable components.
Surprisingly, it performs well on various 3D tasks, requiring no parameters or training, and even surpasses existing fully trained models.
arXiv Detail & Related papers (2023-03-14T17:59:02Z) - Self-Supervised Learning with Multi-View Rendering for 3D Point Cloud
Analysis [33.31864436614945]
We propose a novel pre-training method for 3D point cloud models.
Our pre-training is self-supervised by a local pixel/point level correspondence loss and a global image/point cloud level loss.
These improved models outperform existing state-of-the-art methods on various datasets and downstream tasks.
arXiv Detail & Related papers (2022-10-28T05:23:03Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating k-NN operations.
The proposed framework, PointAttN, is simple, neat, and effective, and precisely captures the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z) - Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z) - PnP-3D: A Plug-and-Play for 3D Point Clouds [38.05362492645094]
We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
arXiv Detail & Related papers (2021-08-16T23:59:43Z) - Spherical Interpolated Convolutional Network with Distance-Feature
Density for 3D Semantic Segmentation of Point Clouds [24.85151376535356]
A spherical interpolated convolution operator is proposed to replace the traditional grid-shaped 3D convolution operator.
The proposed method achieves good performance on the ScanNet dataset and Paris-Lille-3D dataset.
arXiv Detail & Related papers (2020-11-27T15:35:12Z) - Local Grid Rendering Networks for 3D Object Detection in Point Clouds [98.02655863113154]
CNNs are powerful, but directly applying convolutions to point data after voxelizing entire point clouds into a dense regular 3D grid is computationally costly.
We propose a novel and principled Local Grid Rendering (LGR) operation to render the small neighborhood of a subset of input points into a low-resolution 3D grid independently.
We validate LGR-Net for 3D object detection on the challenging ScanNet and SUN RGB-D datasets.
arXiv Detail & Related papers (2020-07-04T13:57:43Z) - Airborne LiDAR Point Cloud Classification with Graph Attention
Convolution Neural Network [5.69168146446103]
We present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR.
Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds.
Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves a new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfying overall accuracy (83.2%).
arXiv Detail & Related papers (2020-04-20T05:12:31Z) - PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection [76.30585706811993]
We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN).
Our proposed method deeply integrates the 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction.
It takes advantage of the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of PointNet-based networks.
arXiv Detail & Related papers (2019-12-31T06:34:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.