CAE-LO: LiDAR Odometry Leveraging Fully Unsupervised Convolutional
Auto-Encoder for Interest Point Detection and Feature Description
- URL: http://arxiv.org/abs/2001.01354v3
- Date: Sat, 31 Oct 2020 01:22:44 GMT
- Title: CAE-LO: LiDAR Odometry Leveraging Fully Unsupervised Convolutional
Auto-Encoder for Interest Point Detection and Feature Description
- Authors: Deyu Yin, Qian Zhang, Jingbin Liu, Xinlian Liang, Yunsheng Wang, Jyri
Maanpää, Hao Ma, Juha Hyyppä, and Ruizhi Chen
- Abstract summary: We propose a fully unsupervised Convolutional Auto-Encoder based LiDAR Odometry (CAE-LO) that detects interest points from spherical ring data using a 2D CAE and extracts features from a multi-resolution voxel model using a 3D CAE.
We make several key contributions: 1) experiments on the KITTI dataset show that our interest points capture more local detail, improving the matching success rate in unstructured scenarios, and our features outperform the state of the art by more than 50% in matching inlier ratio.
- Score: 10.73965992177754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an important technology in 3D mapping, autonomous driving, and robot
navigation, LiDAR odometry remains a challenging task. An appropriate data
structure and unsupervised deep learning are the keys to an easily adjustable,
high-performance LiDAR odometry solution. Utilizing a compact, 2D-structured
spherical ring projection model and a voxel model that preserves the original
shape of the input data, we propose a fully unsupervised Convolutional
Auto-Encoder based LiDAR Odometry (CAE-LO) that detects interest points from
spherical ring data using a 2D CAE and extracts features from a multi-resolution
voxel model using a 3D CAE. We make several key contributions: 1) experiments
on the KITTI dataset show that our interest points capture more local detail,
improving the matching success rate in unstructured scenarios, and our features
outperform the state of the art by more than 50% in matching inlier ratio;
2) we also propose a keyframe selection method based on the transfer of matching
pairs, an odometry refinement method for keyframes based on extended interest
points from spherical rings, and a backward pose update method. The odometry
refinement experiments verify the feasibility and effectiveness of the proposed
ideas.
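The spherical ring projection described in the abstract can be sketched as follows: a LiDAR sweep is mapped to a compact 2D range image, with rows indexed by elevation (laser ring) and columns by azimuth, which a 2D CAE can then consume. This is a minimal illustrative sketch; the field-of-view bounds and image dimensions below are Velodyne-like assumptions, not parameters taken from the paper.

```python
import numpy as np

def spherical_ring_projection(points, n_rings=64, n_cols=1800,
                              fov_up=2.0, fov_down=-24.8):
    """Project an (N, 3) LiDAR point cloud onto a 2D range image.

    Rows correspond to elevation (laser rings), columns to azimuth.
    The vertical field-of-view defaults are illustrative
    Velodyne HDL-64E-like values, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range per point
    azimuth = np.arctan2(y, x)                      # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Normalize angles into integer pixel coordinates.
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = (fov_up_r - elevation) / (fov_up_r - fov_down_r) * n_rings
    row = np.clip(row.astype(int), 0, n_rings - 1)

    image = np.zeros((n_rings, n_cols), dtype=np.float32)
    image[row, col] = r                             # store range as pixel value
    return image

# Tiny synthetic cloud: one point straight ahead, one behind the sensor.
pts = np.array([[10.0, 0.0, -1.0], [-5.0, 0.0, 0.5]])
img = spherical_ring_projection(pts)
print(img.shape)  # (64, 1800)
```

Unlike a raw unordered point list, this structured image makes standard 2D convolutions (and hence a 2D CAE) directly applicable.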
Related papers
- PVAFN: Point-Voxel Attention Fusion Network with Multi-Pooling Enhancing for 3D Object Detection [59.355022416218624]
The integration of point and voxel representations is becoming more common in LiDAR-based 3D object detection.
We propose a novel two-stage 3D object detector, called Point-Voxel Attention Fusion Network (PVAFN).
PVAFN uses a multi-pooling strategy to integrate both multi-scale and region-specific information effectively.
arXiv Detail & Related papers (2024-08-26T19:43:01Z)
- KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation [87.23575166061413]
KP-RED is a unified KeyPoint-driven REtrieval and Deformation framework.
It takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models.
arXiv Detail & Related papers (2024-03-15T08:44:56Z)
- PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR Point Clouds [29.15589024703907]
In this paper, we revisit the local point aggregators from the perspective of allocating computational resources.
We find that the simplest pillar based models perform surprisingly well considering both accuracy and latency.
Our results challenge the common intuition that detailed geometry modeling is essential to achieve high performance in 3D object detection.
arXiv Detail & Related papers (2023-05-08T17:59:14Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA shows to be effective in identifying valuable points related to foreground objects and improving feature learning for point-based 3D detection.
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- Efficient 3D Deep LiDAR Odometry [16.388259779644553]
This paper proposes an efficient 3D point cloud learning architecture, named PWCLO-Net.
The entire architecture is holistically optimized end-to-end to achieve adaptive learning of cost volume and mask.
arXiv Detail & Related papers (2021-11-03T11:09:49Z)
- SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection [9.924083358178239]
We propose two variants of self-attention for contextual modeling in 3D object detection.
We first incorporate the pairwise self-attention mechanism into the current state-of-the-art BEV, voxel and point-based detectors.
Next, we propose a self-attention variant that samples a subset of the most representative features by learning deformations over randomly sampled locations.
arXiv Detail & Related papers (2021-01-07T18:30:32Z)
- Multi-View Adaptive Fusion Network for 3D Object Detection [14.506796247331584]
3D object detection based on LiDAR-camera fusion is an emerging research theme for autonomous driving.
We propose a single-stage multi-view fusion framework that takes LiDAR bird's-eye view, LiDAR range view and camera view images as inputs for 3D object detection.
We design an end-to-end learnable network named MVAF-Net to integrate these two components.
arXiv Detail & Related papers (2020-11-02T00:06:01Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
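The voxel-based input that SelfVoxeLO's 3D convolutions (and CAE-LO's multi-resolution 3D CAE) operate on can be illustrated by a minimal occupancy-grid voxelization. The grid shape and voxel size here are arbitrary assumptions for illustration, not values from either paper:

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid=(32, 32, 16)):
    """Quantize an (N, 3) point cloud into a binary occupancy grid,
    the kind of input a 3D CNN consumes. Grid extent and voxel size
    are illustrative assumptions.
    """
    origin = points.min(axis=0)                    # anchor grid at cloud minimum
    idx = ((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid)), axis=1)
    vox = np.zeros(grid, dtype=np.float32)
    vox[tuple(idx[keep].T)] = 1.0                  # mark occupied voxels
    return vox

pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.5], [0.05, 0.0, 0.0]])
v = voxelize(pts)
print(v.shape, int(v.sum()))  # (32, 32, 16) 2 -- two points share a voxel
```

Voxelization preserves the 3D shape of the input (unlike a flattened 2D projection), at the cost of the memory footprint of the grid; multi-resolution variants trade these off across scales.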
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
- LodoNet: A Deep Neural Network with 2D Keypoint Matching for 3D LiDAR Odometry Estimation [22.664095688406412]
We propose to transfer the LiDAR frames to image space and reformulate the problem as image feature extraction.
With the help of the scale-invariant feature transform (SIFT) for feature extraction, we are able to generate matched keypoint pairs (MKPs).
A convolutional neural network pipeline is designed for LiDAR odometry estimation by extracted MKPs.
The proposed scheme, namely LodoNet, is then evaluated in the KITTI odometry estimation benchmark, achieving on par with or even better results than the state-of-the-art.
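LodoNet's matched keypoint pairs (MKPs) come from SIFT applied to LiDAR frames projected into image space. As an illustrative stand-in for that matching step, the sketch below pairs descriptors by mutual nearest neighbor with Lowe's ratio test, using random vectors in place of real SIFT descriptors (the actual pipeline uses an image feature extractor, not this toy data):

```python
import numpy as np

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbor matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) descriptor arrays from two frames.
    Returns index pairs (i, j) forming matched keypoint pairs (MKPs).
    Illustrative only; not LodoNet's actual implementation.
    """
    # Pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    pairs = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        # Ratio test: best match must be clearly better than the runner-up.
        if d[i, best] < ratio * d[i, second]:
            # Mutual check: descriptor `best` must also prefer `i`.
            if np.argmin(d[:, best]) == i:
                pairs.append((i, int(best)))
    return pairs

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))
b = a + rng.normal(scale=0.01, size=(5, 8))   # slightly perturbed copies
print(match_keypoints(a, b))  # each descriptor matches its counterpart
```

The ratio test discards ambiguous matches, and the mutual check removes one-sided ones; the surviving MKPs are what a downstream pose-estimation network consumes.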
arXiv Detail & Related papers (2020-09-01T01:09:41Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them, however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3d parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.