Lidar with Velocity: Motion Distortion Correction of Point Clouds from
Oscillating Scanning Lidars
- URL: http://arxiv.org/abs/2111.09497v1
- Date: Thu, 18 Nov 2021 03:13:08 GMT
- Title: Lidar with Velocity: Motion Distortion Correction of Point Clouds from
Oscillating Scanning Lidars
- Authors: Wen Yang, Zheng Gong, Baifu Huang and Xiaoping Hong
- Abstract summary: Lidar point cloud distortion from moving objects is an important problem in autonomous driving.
Gaussian-based lidar and camera fusion is proposed to estimate the full velocity and correct the lidar distortion.
The framework is evaluated on real road data, and the fusion method outperforms traditional ICP-based and point-cloud-only methods.
- Score: 5.285472406047901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lidar point cloud distortion from moving objects is an important problem in
autonomous driving, and it has recently become even more demanding with the
emergence of newer lidars that feature back-and-forth scanning patterns.
Accurately estimating a moving object's velocity would not only provide a
tracking capability but also correct the point cloud distortion with a more
accurate description of the moving object. Since lidar measures
time-of-flight distance but with sparse angular resolution, the measurement
is precise radially but sparse angularly. A camera, on the other hand,
provides dense angular resolution. In this paper, Gaussian-based lidar and
camera fusion is proposed to estimate the full velocity and correct the
lidar distortion. A probabilistic Kalman-filter framework is provided to
track the moving objects, estimate their velocities, and simultaneously
correct the point cloud distortions. The framework is evaluated on real road
data, and the fusion method outperforms traditional ICP-based and
point-cloud-only methods. The complete working
framework is open-sourced
(https://github.com/ISEE-Technology/lidar-with-velocity) to accelerate the
adoption of the emerging lidars.
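The core correction step described in the abstract can be illustrated with a short sketch. This is not the paper's released code (see the linked repository for that); it is a minimal illustration assuming each lidar point carries its own capture timestamp and that the tracked object's velocity is approximately constant over one scan, so every point can be warped to a common reference time.

```python
import numpy as np

def correct_motion_distortion(points, timestamps, velocity, t_ref):
    """De-skew lidar points on a moving object to a common reference time.

    points:     (N, 3) array of measured points on the tracked object
    timestamps: (N,)   capture time of each point within the scan
    velocity:   (3,)   estimated object velocity (e.g. from a Kalman filter)
    t_ref:      scalar reference time the points are warped to
    """
    dt = (t_ref - timestamps)[:, None]          # (N, 1) time offset per point
    return points + dt * velocity[None, :]      # shift each point along the velocity
```

A point captured earlier in the scan is shifted further along the velocity vector, which is why the quality of the de-skewed cloud depends directly on how accurately the full (not just radial) velocity is estimated.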
Related papers
- Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image [14.485182089870928]
We propose a novel framework that leverages motion blur as a rich cue for motion estimation.
Our approach works by predicting a dense motion flow field and a monocular depth map directly from a single motion-blurred image.
Our method produces an IMU-like measurement that robustly captures fast and aggressive camera movements.
arXiv Detail & Related papers (2025-03-21T17:58:56Z) - Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency [58.719310295870024]
This paper presents an event-based framework for tracking any point.
It tackles the challenges posed by spatial sparsity and motion sensitivity in events.
It achieves 150% faster processing with competitive model parameters.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - StraightPCF: Straight Point Cloud Filtering [50.66412286723848]
Point cloud filtering is a fundamental 3D vision task, which aims to remove noise while recovering the underlying clean surfaces.
We introduce StraightPCF, a new deep learning based method for point cloud filtering.
It works by moving noisy points along straight paths, thus reducing discretization errors while ensuring faster convergence to the clean surfaces.
arXiv Detail & Related papers (2024-05-14T05:41:59Z) - Spatio-Temporal Bi-directional Cross-frame Memory for Distractor Filtering Point Cloud Single Object Tracking [2.487142846438629]
3D single object tracking within LiDAR point clouds is a pivotal task in computer vision.
Existing methods, which depend solely on appearance matching via networks or utilize information from successive frames, encounter significant challenges.
We design an innovative cross-frame bi-temporal motion tracker, named STMD-Tracker, to mitigate these challenges.
arXiv Detail & Related papers (2024-03-23T13:15:44Z) - A 5-Point Minimal Solver for Event Camera Relative Motion Estimation [47.45081895021988]
We introduce a novel minimal 5-point solver that estimates line parameters and linear camera velocity projections, which can be fused into a single, averaged linear velocity when considering multiple lines.
Our method consistently achieves a 100% success rate in estimating linear velocity where existing closed-form solvers only achieve between 23% and 70%.
arXiv Detail & Related papers (2023-09-29T08:30:18Z) - Aligning Bird-Eye View Representation of Point Cloud Sequences using
Scene Flow [0.0]
Low-resolution point clouds are challenging for object detection methods due to their sparsity.
We develop a plug-in module that enables single-frame detectors to compute scene flow to rectify their Bird-Eye View representation.
arXiv Detail & Related papers (2023-05-04T15:16:21Z) - Correlating sparse sensing for large-scale traffic speed estimation: A
Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian enhanced low-rank tensor (LETC) framework featuring both lowrankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z) - ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving
Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z) - Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z) - FMODetect: Robust Detection and Trajectory Estimation of Fast Moving
Objects [110.29738581961955]
We propose the first learning-based approach for detection and trajectory estimation of fast moving objects.
The proposed method first detects all fast moving objects as a truncated distance function to the trajectory.
For the sharp appearance estimation, we propose an energy minimization based deblurring.
arXiv Detail & Related papers (2020-12-15T11:05:34Z) - DroTrack: High-speed Drone-based Object Tracking Under Uncertainty [0.23204178451683263]
DroTrack is a high-speed visual single-object tracking framework for drone-captured video sequences.
We implement an effective object segmentation based on Fuzzy C Means.
We also leverage the geometrical angular motion to estimate a reliable object scale.
arXiv Detail & Related papers (2020-05-02T13:16:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.