GPCO: An Unsupervised Green Point Cloud Odometry Method
- URL: http://arxiv.org/abs/2112.04054v1
- Date: Wed, 8 Dec 2021 00:24:03 GMT
- Title: GPCO: An Unsupervised Green Point Cloud Odometry Method
- Authors: Pranav Kadam, Min Zhang, Shan Liu, C.-C. Jay Kuo
- Abstract summary: A lightweight point cloud odometry solution is proposed and named the green point cloud odometry (GPCO) method.
GPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans.
GPCO outperforms benchmark deep learning methods in accuracy while having a significantly smaller model size and shorter training time.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual odometry aims to track the incremental motion of an object using the
information captured by visual sensors. In this work, we study the point cloud
odometry problem, where only the point cloud scans obtained by the LiDAR (Light
Detection And Ranging) are used to estimate the object's motion trajectory. A
lightweight point cloud odometry solution is proposed and named the green point
cloud odometry (GPCO) method. GPCO is an unsupervised learning method that
predicts object motion by matching features of consecutive point cloud scans.
It consists of three steps. First, a geometry-aware point sampling scheme is
used to select discriminant points from the large point cloud. Second, the view
is partitioned into four regions surrounding the object, and the PointHop++
method is used to extract point features. Third, point correspondences are
established to estimate object motion between two consecutive scans.
Experiments on the KITTI dataset demonstrate the effectiveness of the GPCO
method: it outperforms benchmark deep learning methods in accuracy while
having a significantly smaller model size and shorter training time.
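The third step, recovering rigid motion from point correspondences, is commonly solved in closed form with the Kabsch/SVD alignment. The sketch below illustrates that standard estimator, assuming correspondences between the two scans are already established; `estimate_rigid_motion` is an illustrative helper name, not code from the paper.

```python
import numpy as np

def estimate_rigid_motion(src, dst):
    """Estimate rotation R and translation t such that dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of matched points from two consecutive scans.
    Classic Kabsch/SVD alignment; the exact estimator used by GPCO may differ.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Chaining the per-pair transforms (R, t) over successive scans then yields the full motion trajectory.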
Related papers
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Point2Point: A Framework for Efficient Deep Learning on Hilbert sorted Point Clouds with applications in Spatio-Temporal Occupancy Prediction [0.0]
We propose a novel approach that represents point clouds as a locality-preserving 1D ordering induced by the Hilbert space-filling curve.
We also introduce Point2Point, a neural architecture that can effectively learn on Hilbert-sorted point clouds.
arXiv Detail & Related papers (2023-06-28T15:30:08Z)
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives peculiar to point clouds: centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: it first randomly masks out groups of points and then applies a Transformer-based point cloud encoder.
arXiv Detail & Related papers (2023-05-15T17:14:55Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
- PCRP: Unsupervised Point Cloud Object Retrieval and Pose Estimation [50.3020332934185]
An unsupervised point cloud object retrieval and pose estimation method, called PCRP, is proposed in this work.
Experiments on the ModelNet40 dataset demonstrate the superior performance of PCRP in comparison with traditional and learning based methods.
arXiv Detail & Related papers (2022-02-16T03:37:43Z)
- Attribute Artifacts Removal for Geometry-based Point Cloud Compression [43.60640890971367]
Geometry-based point cloud compression (G-PCC) can achieve remarkable compression efficiency for point clouds.
However, it still leads to serious attribute compression artifacts, especially under low-bitrate scenarios.
We propose a Multi-Scale Graph Attention Network (MSGAT) to remove the artifacts of point cloud attributes.
arXiv Detail & Related papers (2021-12-01T15:21:06Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework, PRIN, focusing on rotation-invariant feature extraction in point cloud analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Robust Kernel-based Feature Representation for 3D Point Cloud Analysis via Circular Graph Convolutional Network [2.42919716430661]
We present a new local feature description method that is robust to rotation, density, and scale variations.
To improve representations of the local descriptors, we propose a global aggregation method.
Our method shows superior performance compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-12-22T18:02:57Z)
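As a supplement to the Point2Point entry above: a Hilbert ordering maps quantized coordinates to positions along a space-filling curve so that nearby points receive nearby 1D indices. The sketch below uses the classic iterative xy-to-d mapping in 2D (`hilbert_d` is an illustrative name; the paper itself works in 3D, where the recursion is analogous).

```python
def hilbert_d(order, x, y):
    """Distance of integer grid cell (x, y) along a 2D Hilbert curve of the
    given order (grid side 2**order). Classic iterative xy-to-d conversion."""
    n = 1 << order          # side length of the grid
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:         # rotate the quadrant so the recursion lines up
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Example: sort quantized 2D points into a locality-preserving 1D order.
points = [(3, 1), (0, 0), (2, 2), (1, 3)]
ordered = sorted(points, key=lambda p: hilbert_d(2, *p))
```

Because consecutive Hilbert indices always correspond to adjacent grid cells, sorting by this key keeps spatially close points close in the resulting sequence.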
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.