PnP-3D: A Plug-and-Play for 3D Point Clouds
- URL: http://arxiv.org/abs/2108.07378v1
- Date: Mon, 16 Aug 2021 23:59:43 GMT
- Title: PnP-3D: A Plug-and-Play for 3D Point Clouds
- Authors: Shi Qiu, Saeed Anwar, Nick Barnes
- Abstract summary: We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
- Score: 38.05362492645094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the help of the deep learning paradigm, many point cloud networks have
been invented for visual analysis. However, there is great potential for
development of these networks since the given information of point cloud data
has not been fully exploited. To improve the effectiveness of existing networks
in analyzing point cloud data, we propose a plug-and-play module, PnP-3D,
aiming to refine the fundamental point cloud feature representations by
involving more local context and global bilinear response from explicit 3D
space and implicit feature space. To thoroughly evaluate our approach, we
conduct experiments on three standard point cloud analysis tasks, including
classification, semantic segmentation, and object detection, where we select
three state-of-the-art networks from each task for evaluation. Serving as a
plug-and-play module, PnP-3D can significantly boost the performances of
established networks. In addition to achieving state-of-the-art results on four
widely used point cloud benchmarks, we present comprehensive ablation studies
and visualizations to demonstrate our approach's advantages. The code will be
available at https://github.com/ShiQiu0419/pnp-3d.
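To make the plug-and-play idea concrete, the following is a minimal PyTorch sketch, based only on the abstract above rather than the authors' released code, of a refinement block that gathers local context via k-NN in explicit 3D space, gates it with a global channel-wise response computed in the implicit feature space, and returns features of the same shape so it can be dropped between layers of an existing point cloud network. The class name `LocalGlobalRefine` and the neighborhood size `k` are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of a plug-and-play point-feature refinement block
# (illustrative, not the PnP-3D reference implementation).
import torch
import torch.nn as nn


class LocalGlobalRefine(nn.Module):
    def __init__(self, channels: int, k: int = 16):
        super().__init__()
        self.k = k
        # fuses each point's own feature with an aggregated neighbor feature
        self.local_mlp = nn.Sequential(
            nn.Conv1d(2 * channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        # produces a per-channel global response that gates the local output
        self.global_mlp = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz:   (B, N, 3) point coordinates (explicit 3D space)
        # feats: (B, C, N) per-point features (implicit feature space)
        B, C, N = feats.shape
        dists = torch.cdist(xyz, xyz)                                 # (B, N, N)
        idx = dists.topk(self.k, dim=-1, largest=False).indices       # (B, N, k)
        feats_t = feats.transpose(1, 2)                               # (B, N, C)
        batch_idx = torch.arange(B, device=feats.device).view(B, 1, 1)
        neighbor_feats = feats_t[batch_idx, idx]                      # (B, N, k, C)
        local_ctx = neighbor_feats.max(dim=2).values                  # (B, N, C)
        fused = torch.cat([feats, local_ctx.transpose(1, 2)], dim=1)  # (B, 2C, N)
        refined = self.local_mlp(fused)                               # (B, C, N)
        gate = self.global_mlp(refined.mean(dim=-1))                  # (B, C) global response
        return refined * gate.unsqueeze(-1) + feats                   # residual, same shape


# Usage: the block preserves the (B, C, N) feature shape, which is what allows it
# to be inserted into classification, segmentation, or detection backbones.
block = LocalGlobalRefine(channels=64)
xyz, feats = torch.rand(2, 1024, 3), torch.rand(2, 64, 1024)
out = block(xyz, feats)  # (2, 64, 1024)
```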
Related papers
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering on the point embedding space.
Our algorithm shows notable improvements on famous point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z) - Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z) - Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis [14.844183458784235]
We present an alternative way to enhance existing deep neural networks without redesigning them or adding extra parameters, termed the Spatial-Neighbor Adapter (SN-Adapter).
Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as spatial knowledge.
For a test point cloud, the SN-Adapter retrieves k nearest neighbors (k-NN) from the pre-constructed spatial prototypes and linearly interpolates the k-NN prediction with that of the original 3D network (a minimal sketch of this blending appears after the list below).
arXiv Detail & Related papers (2023-03-01T17:57:09Z) - Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning
for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN operations.
The proposed framework, namely PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z) - Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding [80.04281842702294]
We introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from several viewpoints.
This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation.
We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space.
arXiv Detail & Related papers (2021-11-30T13:08:19Z) - TreeGCN-ED: Encoding Point Cloud using a Tree-Structured Graph Network [24.299931323012757]
This work proposes an autoencoder-based framework to generate robust embeddings for point clouds.
We demonstrate the applicability of the proposed framework in applications such as 3D point cloud completion and single-image-based 3D reconstruction.
arXiv Detail & Related papers (2021-10-07T03:52:56Z) - Point Discriminative Learning for Unsupervised Representation Learning
on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle-level and global-level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z) - Semantic Segmentation for Real Point Cloud Scenes via Bilateral
Augmentation and Adaptive Fusion [38.05362492645094]
Real point cloud scenes can intuitively capture complex surroundings in the real world, but the raw nature of 3D data makes them very challenging for machine perception.
We concentrate on the essential visual task, semantic segmentation, for large-scale point cloud data collected in reality.
By comparing with state-of-the-art networks on three different benchmarks, we demonstrate the effectiveness of our network.
arXiv Detail & Related papers (2021-03-12T04:13:20Z)
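As a concrete illustration of the SN-Adapter-style blending mentioned above, here is a minimal NumPy sketch, based only on the summary rather than the authors' code: training-set features are stored as prototypes, a test feature retrieves its k nearest prototypes, and the resulting k-NN class distribution is linearly interpolated with the frozen network's own prediction. The function name `knn_blend`, the inverse-distance vote weighting, and the blend weight `lambda_` are illustrative assumptions.

```python
# Minimal sketch of blending a frozen network's prediction with a non-parametric
# k-NN prediction over stored training features (illustrative, not the SN-Adapter code).
import numpy as np


def knn_blend(test_feat, proto_feats, proto_labels, net_logits,
              num_classes, k=8, lambda_=0.5):
    # test_feat:   (D,)   frozen-encoder feature of one test point cloud
    # proto_feats: (M, D) stored training features; proto_labels: (M,) int class ids
    # net_logits:  (num_classes,) prediction of the original 3D network
    dists = np.linalg.norm(proto_feats - test_feat, axis=1)    # (M,)
    nn_idx = np.argsort(dists)[:k]                             # k nearest prototypes
    weights = 1.0 / (dists[nn_idx] + 1e-8)                     # inverse-distance votes
    knn_probs = np.zeros(num_classes)
    for w, lbl in zip(weights, proto_labels[nn_idx]):
        knn_probs[int(lbl)] += w
    knn_probs /= knn_probs.sum()
    net_probs = np.exp(net_logits - net_logits.max())          # softmax over logits
    net_probs /= net_probs.sum()
    # linear interpolation of the parametric and non-parametric predictions
    return lambda_ * net_probs + (1.0 - lambda_) * knn_probs
```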