Robust Self-Supervised LiDAR Odometry via Representative Structure
Discovery and 3D Inherent Error Modeling
- URL: http://arxiv.org/abs/2202.13353v1
- Date: Sun, 27 Feb 2022 12:52:27 GMT
- Title: Robust Self-Supervised LiDAR Odometry via Representative Structure
Discovery and 3D Inherent Error Modeling
- Authors: Yan Xu, Junyi Lin, Jianping Shi, Guofeng Zhang, Xiaogang Wang,
Hongsheng Li
- Abstract summary: We develop a two-stage odometry estimation network, where we obtain the ego-motion by estimating a set of sub-region transformations.
In this paper, we aim to alleviate the influence of unreliable structures in training, inference and mapping phases.
Our two-frame odometry outperforms the previous state of the art by 16%/12% in terms of translational/rotational error.
- Score: 67.75095378830694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Correct ego-motion estimation fundamentally relies on understanding the
correspondences between adjacent LiDAR scans. However, given complex
scenarios and low-resolution LiDAR, finding reliable structures for
identifying correspondences can be challenging. In this paper, we delve into
structure reliability for accurate self-supervised ego-motion estimation and
aim to alleviate the influence of unreliable structures in the training, inference,
and mapping phases. We improve self-supervised LiDAR odometry substantially
in three aspects: 1) A two-stage odometry estimation network is developed,
where we obtain the ego-motion by estimating a set of sub-region
transformations and averaging them with a motion voting mechanism, to encourage
the network to focus on representative structures. 2) The inherent alignment
errors, which cannot be eliminated via ego-motion optimization, are
down-weighted in the losses based on 3D point covariance estimates. 3) The
discovered representative structures and learned point covariances are
incorporated into the mapping module to improve the robustness of map
construction. Our two-frame odometry outperforms the previous state of the art
by 16%/12% in terms of translational/rotational error on the KITTI dataset and
performs consistently well on the Apollo-Southbay dataset. With our mapping
module and more unlabeled training data, we can even rival fully supervised
counterparts.
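The motion voting and covariance down-weighting ideas in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the weighted chordal quaternion mean, and the Mahalanobis-style residual weighting are all assumptions standing in for the paper's exact formulation.

```python
import numpy as np

def vote_ego_motion(translations, quaternions, weights):
    """Fuse per-sub-region rigid transforms into one ego-motion estimate
    (hypothetical voting scheme; the paper's mechanism may differ).
    translations: (N, 3); quaternions: (N, 4) unit quaternions; weights: (N,)."""
    w = weights / weights.sum()
    t = (w[:, None] * translations).sum(axis=0)
    # Weighted chordal mean of rotations: dominant eigenvector of sum_i w_i q_i q_i^T
    Q = (w[:, None, None] * quaternions[:, :, None] * quaternions[:, None, :]).sum(axis=0)
    _, eigvecs = np.linalg.eigh(Q)          # eigenvalues in ascending order
    q = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue
    return t, q / np.linalg.norm(q)

def covariance_weighted_loss(residuals, covariances, eps=1e-6):
    """Down-weight alignment residuals by per-point 3D covariance
    (Mahalanobis-style), so inherently noisy points contribute less.
    residuals: (N, 3); covariances: (N, 3, 3)."""
    inv = np.linalg.inv(covariances + eps * np.eye(3))
    per_point = np.einsum('ni,nij,nj->n', residuals, inv, residuals)
    return per_point.mean()
```

With identical sub-region votes, the voted ego-motion simply reproduces them; in training, the weights and covariances would come from the network's per-region confidence and per-point uncertainty heads.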
Related papers
- DELO: Deep Evidential LiDAR Odometry using Partial Optimal Transport [23.189529003370303]
Real-time LiDAR-based odometry is imperative for many applications like robot navigation, globally consistent 3D scene map reconstruction, or safe motion-planning.
We introduce a novel deep learning-based real-time (approx. 35-40 ms per frame) LO method that jointly learns accurate frame-to-frame correspondences and the model's predictive uncertainty (PU) as evidence to safeguard LO predictions.
We evaluate our method on the KITTI dataset and show competitive performance and even superior generalization compared with recent state-of-the-art approaches.
arXiv Detail & Related papers (2023-08-14T14:06:21Z)
- Weakly-supervised 3D Pose Transfer with Keypoints [57.66991032263699]
Main challenges of 3D pose transfer are: 1) Lack of paired training data with different characters performing the same pose; 2) Disentangling pose and shape information from the target mesh; 3) Difficulty in applying to meshes with different topologies.
We propose a novel weakly-supervised keypoint-based framework to overcome these difficulties.
arXiv Detail & Related papers (2023-07-25T12:40:24Z)
- Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, including estimating pairwise intensity functions and learning a projection that minimises a notion of intensity reconstruction error.
Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z)
- Coordinated Transformer with Position & Sample-aware Central Loss for Anatomical Landmark Detection [6.004522909994631]
Heatmap-based anatomical landmark detection still faces two unresolved challenges.
We propose a novel position-aware and sample-aware central loss.
A Coordinated Transformer, called CoorTransformer, is proposed to address the challenge of ignoring structure information.
arXiv Detail & Related papers (2023-05-18T23:05:01Z)
- Poses as Queries: Image-to-LiDAR Map Localization with Transformers [5.704968411509063]
High-precision vehicle localization with commercial setups is a crucial technique for high-level autonomous driving tasks.
Estimating pose by finding correspondences between such cross-modal sensor data is challenging.
We propose a novel Transformer-based neural network to register 2D images into a 3D LiDAR map in an end-to-end manner.
arXiv Detail & Related papers (2023-05-07T14:57:58Z)
- Benchmarking the Robustness of LiDAR Semantic Segmentation Models [78.6597530416523]
In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions.
We propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy.
We design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications.
arXiv Detail & Related papers (2023-01-03T06:47:31Z)
- The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss, called KFIoU, is easier to implement and works better than the exact SkewIoU.
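The Gaussian-plus-Kalman idea above can be sketched as follows, for the same-center case (the paper handles center offsets with a separate term). The w/2, h/2 standard-deviation convention and the overlap ratio are illustrative assumptions about the general mechanism, not the paper's exact loss.

```python
import numpy as np

def box_to_gaussian(w, h, theta):
    """Represent a rotated box (same-center case) as a 2D Gaussian covariance.
    Assumed convention: standard deviations w/2, h/2 along the box axes."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    sqrt_cov = R @ np.diag([w / 2.0, h / 2.0]) @ R.T
    return sqrt_cov @ sqrt_cov.T

def kfiou(cov1, cov2):
    """Trend-level overlap of two same-center Gaussians via the Kalman-fused
    covariance cov1 (cov1 + cov2)^-1 cov2, with 'volumes' as sqrt-determinants.
    Identical boxes give 1/3 (a constant rescaling maps this to [0, 1])."""
    fused = cov1 @ np.linalg.inv(cov1 + cov2) @ cov2
    v1, v2, v3 = (np.sqrt(np.linalg.det(c)) for c in (cov1, cov2, fused))
    return v3 / (v1 + v2 - v3)
```

Because every step is a smooth matrix operation, this surrogate avoids the non-differentiable corner cases of computing the exact rotated-box intersection.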
arXiv Detail & Related papers (2022-01-29T10:54:57Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
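Before a 3D convolution network like SelfVoxeLO's can consume raw LiDAR, the point cloud must be discretized onto a grid. A minimal occupancy-voxelization sketch is shown below; the grid bounds and voxel size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def voxelize(points, bounds=(-50.0, 50.0), voxel_size=1.0):
    """Convert an (N, 3) LiDAR point cloud into a dense binary occupancy grid,
    a form that 3D convolutions can process directly. Bounds and resolution
    here are illustrative only."""
    lo, hi = bounds
    n = int((hi - lo) / voxel_size)
    grid = np.zeros((n, n, n), dtype=np.float32)
    idx = np.floor((points - lo) / voxel_size).astype(int)
    mask = np.all((idx >= 0) & (idx < n), axis=1)  # drop out-of-range points
    ix, iy, iz = idx[mask].T
    grid[ix, iy, iz] = 1.0
    return grid
```

Real systems typically store richer per-voxel features (mean offset, intensity) and use sparse convolutions for efficiency, but the indexing logic is the same.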
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.