ODFNet: Using orientation distribution functions to characterize 3D
point clouds
- URL: http://arxiv.org/abs/2012.04708v1
- Date: Tue, 8 Dec 2020 19:54:20 GMT
- Title: ODFNet: Using orientation distribution functions to characterize 3D
point clouds
- Authors: Yusuf H. Sahin, Alican Mertan, Gozde Unal
- Abstract summary: We leverage point orientation distributions around a point to obtain an expressive local-neighborhood representation for point clouds.
New ODFNet model achieves state-of-the-art accuracy for object classification on ModelNet40 and ScanObjectNN datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning new representations of 3D point clouds is an active research area in
3D vision, as the order-invariant point cloud structure still presents
challenges to the design of neural network architectures. Recent works explored learning global features, local features, or both for point clouds; however, none of the earlier methods focused on capturing contextual shape information by analysing the local orientation distribution of points. In this paper, we leverage point orientation distributions around a point to obtain an expressive local neighborhood representation for point clouds. We achieve this
by dividing the spherical neighborhood of a given point into predefined cone
volumes and using the statistics inside each volume as point features. In this way, a local patch is represented not only by the selected point's nearest neighbors, but also by a point-density distribution defined along multiple orientations around the point. We are then able to construct an
orientation distribution function (ODF) neural network that involves an
ODFBlock which relies on MLP (multi-layer perceptron) layers. The new ODFNet model achieves state-of-the-art accuracy for object classification on the ModelNet40 and ScanObjectNN datasets, and for segmentation on the ShapeNet and S3DIS datasets.
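The cone-partition idea described in the abstract can be illustrated with a minimal sketch: for each point, neighbors inside a spherical neighborhood are assigned to the cone whose axis best aligns with the neighbor direction, and normalized per-cone counts form the feature. This is not the authors' implementation; the cone axes, radius, number of cones, and the choice of point counts as the statistic are all illustrative assumptions.

```python
import numpy as np

def cone_odf_features(points, radius=0.2, n_cones=8, seed=0):
    """Sketch of cone-based orientation histogram features for a point cloud.

    For each point, neighbors within `radius` are binned into the cone
    (from a fixed set of axis directions) whose axis best aligns with the
    neighbor direction; normalized counts per cone form the feature vector.
    Hyperparameters here are illustrative, not the paper's values.
    """
    rng = np.random.default_rng(seed)
    # Illustrative cone axes: random unit vectors (the paper predefines them).
    axes = rng.normal(size=(n_cones, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)

    n = len(points)
    feats = np.zeros((n, n_cones))
    for i in range(n):
        diff = points - points[i]                    # vectors to all points
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 0) & (dist < radius)          # spherical neighborhood
        if not mask.any():
            continue                                 # isolated point: zero feature
        dirs = diff[mask] / dist[mask, None]         # unit directions to neighbors
        cone_idx = np.argmax(dirs @ axes.T, axis=1)  # closest cone axis per neighbor
        counts = np.bincount(cone_idx, minlength=n_cones)
        feats[i] = counts / counts.sum()             # point-density statistic per cone
    return feats

pts = np.random.default_rng(1).uniform(size=(64, 3))
f = cone_odf_features(pts)
print(f.shape)  # (64, 8)
```

In ODFNet these per-cone statistics would then be fed into MLP layers (the ODFBlock); the sketch stops at the feature-extraction step.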
Related papers
- Mini-PointNetPlus: a local feature descriptor in deep learning model for 3D environment perception [7.304195370862869]
We propose a novel local feature descriptor, mini-PointNetPlus, as an alternative for plug-and-play to PointNet.
Our basic idea is to separately project the data points to the individual features considered, each leading to a permutation invariant.
Due to fully utilizing the features by the proposed descriptor, we demonstrate in experiment a considerable performance improvement for 3D perception.
arXiv Detail & Related papers (2023-07-25T07:30:28Z)
- Object Detection in 3D Point Clouds via Local Correlation-Aware Point Embedding [0.0]
We present an improved approach for 3D object detection in point cloud data based on the Frustum PointNet (F-PointNet)
Compared to the original F-PointNet, our newly proposed method considers the point neighborhood when computing point features.
arXiv Detail & Related papers (2023-01-11T18:14:47Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network for processing point cloud in a per-point manner to eliminate kNNs.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- CpT: Convolutional Point Transformer for 3D Point Cloud Processing [10.389972581905]
We present CpT: Convolutional point Transformer - a novel deep learning architecture for dealing with the unstructured nature of 3D point cloud data.
CpT is an improvement over existing attention-based Convolutional Neural Networks as well as previous 3D point cloud processing transformers.
Our model can serve as an effective backbone for various point cloud processing tasks when compared to the existing state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-21T17:45:55Z)
- Learning point embedding for 3D data processing [2.12121796606941]
Current point-based methods are essentially spatial relationship processing networks.
Our architecture, PE-Net, learns the representation of point clouds in high-dimensional space.
Experiments show that PE-Net achieves the state-of-the-art performance in multiple challenging datasets.
arXiv Detail & Related papers (2021-07-19T00:25:28Z)
- Spherical Interpolated Convolutional Network with Distance-Feature Density for 3D Semantic Segmentation of Point Clouds [24.85151376535356]
A spherical interpolated convolution operator is proposed to replace the traditional grid-shaped 3D convolution operator.
The proposed method achieves good performance on the ScanNet dataset and Paris-Lille-3D dataset.
arXiv Detail & Related papers (2020-11-27T15:35:12Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point cloud analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
To detect 3D keypoints, we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z) - PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection [76.30585706811993]
We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN).
Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction.
It takes advantage of the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks.
arXiv Detail & Related papers (2019-12-31T06:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.