Doppler velocity-based algorithm for Clustering and Velocity Estimation
of moving objects
- URL: http://arxiv.org/abs/2112.12984v1
- Date: Fri, 24 Dec 2021 07:57:28 GMT
- Title: Doppler velocity-based algorithm for Clustering and Velocity Estimation
of moving objects
- Authors: Mian Guo, Kai Zhong, Xiaozhi Wang
- Abstract summary: We propose a Doppler velocity-based clustering and velocity estimation algorithm based on the characteristics of FMCW LiDAR.
We show that our algorithm can process at least 4.5 million points and estimate the velocities of 150 moving objects per second on a Ryzen 3600X CPU.
- Score: 11.328509097182895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Doppler velocity-based clustering and velocity estimation
algorithm based on the characteristics of FMCW LiDAR which achieves highly
accurate, single-scan, real-time motion state detection and velocity estimation.
We prove the continuity of the Doppler velocity on the same object. Based on
this principle, we distinguish moving objects from the stationary background
via a region-growing clustering algorithm. The obtained stationary background
is then used to estimate the velocity of the FMCW LiDAR by the least-squares
method. We then estimate the velocity of each moving object using the estimated
LiDAR velocity and the Doppler velocities of the moving objects obtained by
clustering. To ensure real-time processing, we set appropriate least-squares
parameters. Meanwhile, to verify the effectiveness of the algorithm, we create
an FMCW LiDAR model on the autonomous driving simulation platform CARLA to
generate data. The results show that our algorithm can process at least
4.5 million points and estimate the velocities of 150 moving objects per second
on a Ryzen 3600X CPU, with a motion state detection accuracy of over 99% and a
velocity estimation accuracy of 0.1 m/s.
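The pipeline the abstract describes (region-growing clustering on Doppler continuity, least-squares ego-velocity from the stationary background, then per-object velocity recovery) rests on a simple measurement model: a stationary point returns a Doppler reading equal to the negative projection of the sensor velocity onto the beam direction. A minimal NumPy sketch of that model follows; the function names, thresholds, and brute-force neighbor search are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def region_grow(points, doppler, radius=0.5, v_thresh=0.2):
    """Cluster points whose Doppler velocities are continuous.

    Unlabeled neighbors within `radius` whose Doppler readings differ
    by less than `v_thresh` m/s join the growing cluster. Both
    thresholds are illustrative, not taken from the paper.
    """
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        stack = [seed]
        while stack:
            i = stack.pop()
            dist = np.linalg.norm(points - points[i], axis=1)
            nbr = np.where((dist < radius) & (labels == -1) &
                           (np.abs(doppler - doppler[i]) < v_thresh))[0]
            labels[nbr] = cluster
            stack.extend(nbr.tolist())
        cluster += 1
    return labels

def estimate_ego_velocity(directions, doppler):
    """Least squares over stationary points: doppler_i = -d_i . v_ego,
    where d_i is the unit beam direction to point i."""
    v_ego, *_ = np.linalg.lstsq(directions, -doppler, rcond=None)
    return v_ego

def estimate_object_velocity(directions, doppler, v_ego):
    """Moving cluster: doppler_i = d_i . (v_obj - v_ego)."""
    v_rel, *_ = np.linalg.lstsq(directions, doppler, rcond=None)
    return v_rel + v_ego
```

With at least three non-coplanar stationary beams the ego-velocity system is fully determined, which is why the stationary background obtained from clustering suffices for the least-squares step.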
Related papers
- Planar Velocity Estimation for Fast-Moving Mobile Robots Using Event-Based Optical Flow [1.4447019135112429]
We introduce an approach to velocity estimation that is decoupled from wheel-to-surface traction assumptions.
The proposed method is evaluated through in-field experiments on a 1:10 scale autonomous racing platform.
arXiv Detail & Related papers (2025-05-16T11:00:33Z) - DRO: Doppler-Aware Direct Radar Odometry [11.042292216861762]
A renaissance in radar-based sensing for mobile robotic applications is underway.
We propose a novel SE(2) odometry approach for spinning frequency-modulated continuous-wave radars.
Our method has been validated on over 250km of on-road data sourced from public datasets.
arXiv Detail & Related papers (2025-04-29T01:20:30Z) - Full waveform inversion with CNN-based velocity representation extension [4.255346660147713]
Full waveform inversion (FWI) updates the velocity model by minimizing the discrepancy between observed and simulated data.
Discretization errors in numerical modeling and incomplete seismic data acquisition can introduce noise, which propagates through the adjoint operator.
We employ a convolutional neural network (CNN) to refine the velocity model before performing the forward simulation.
We use the same data misfit loss to update both the velocity and network parameters, thereby forming a self-supervised learning procedure.
arXiv Detail & Related papers (2025-04-22T12:14:38Z) - TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [54.46664104437454]
We propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion.
Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed.
Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%.
arXiv Detail & Related papers (2025-04-16T05:25:04Z) - Estimating Scene Flow in Robot Surroundings with Distributed Miniaturized Time-of-Flight Sensors [41.45395153490076]
We present an approach for scene flow estimation from low-density and noisy point clouds acquired from Time of Flight (ToF) sensors distributed on the robot body.
The proposed method clusters points from consecutive frames and applies Iterative Closest Point (ICP) to estimate a dense motion flow.
We employ a fitness-based classification to distinguish between stationary and moving points and an inlier removal strategy to refine geometric correspondences.
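The cluster-then-ICP recipe above can be caricatured with a brute-force nearest-neighbor flow, a median-based inlier filter standing in for the paper's fitness classification, and a speed threshold to label a cluster moving or stationary. All names, thresholds, and the single-translation motion model are assumptions for illustration only.

```python
import numpy as np

def nn_flow(prev, curr):
    """Brute-force nearest-neighbor flow: for each point in `prev`,
    the displacement to its closest point in `curr`."""
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    return curr[idx] - prev

def cluster_velocity(prev, curr, dt, dev_thresh=0.1):
    """Estimate one cluster's velocity between consecutive frames.

    Correspondences whose flow deviates from the median by more than
    `dev_thresh` are discarded (a crude inlier-removal step), and the
    remaining flow is averaged and divided by the frame interval.
    """
    flow = nn_flow(prev, curr)
    med = np.median(flow, axis=0)
    inliers = np.linalg.norm(flow - med, axis=1) < dev_thresh
    return flow[inliers].mean(axis=0) / dt

def is_moving(velocity, speed_thresh=0.5):
    """Label a cluster moving if its estimated speed exceeds a threshold."""
    return bool(np.linalg.norm(velocity) > speed_thresh)
```

A real implementation would use a k-d tree for the neighbor search and full ICP (rotation plus translation) per cluster; this sketch only shows the correspondence-filter-average structure.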
arXiv Detail & Related papers (2025-04-03T09:57:51Z) - Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation [68.56443382421878]
We propose a novel adaptive multi-source predictor for zero-shot video object segmentation (ZVOS).
In the static object predictor, the RGB source is converted to depth and static saliency sources, simultaneously.
Experiments show that the proposed model outperforms the state-of-the-art methods on three challenging ZVOS benchmarks.
arXiv Detail & Related papers (2023-03-18T10:19:29Z) - Correlating sparse sensing for large-scale traffic speed estimation: A
Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian enhanced low-rank tensor (LETC) framework featuring both lowrankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z) - StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate the accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z) - Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred
Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z) - Lidar with Velocity: Motion Distortion Correction of Point Clouds from
Oscillating Scanning Lidars [5.285472406047901]
Lidar point cloud distortion from moving object is an important problem in autonomous driving.
Gaussian-based lidar and camera fusion is proposed to estimate the full velocity and correct the lidar distortion.
The framework is evaluated on real road data, and the fusion method outperforms the traditional ICP-based and point-cloud-only methods.
arXiv Detail & Related papers (2021-11-18T03:13:08Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - A Framework for Real-time Traffic Trajectory Tracking, Speed Estimation,
and Driver Behavior Calibration at Urban Intersections Using Virtual Traffic
Lanes [5.735035463793008]
We present a case study incorporating the highly accurate trajectories and movement classification obtained via VT-Lane.
We use a highly instrumented vehicle to verify the estimated speeds obtained from video inference.
We then use the estimated speeds to calibrate the parameters of a driver behavior model for the vehicles in the area of study.
arXiv Detail & Related papers (2021-06-18T06:15:53Z) - Object Tracking by Detection with Visual and Motion Cues [1.7818230914983044]
Self-driving cars need to detect and track objects in camera images.
We present a simple online tracking algorithm that is based on a constant velocity motion model with a Kalman filter.
We evaluate our approach on the challenging BDD100K dataset.
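The constant-velocity motion model with a Kalman filter mentioned above is a standard construction; a minimal 2D sketch follows, with state layout, noise covariances, and the frame interval chosen for illustration rather than taken from the paper.

```python
import numpy as np

def make_cv_model(dt):
    """Constant-velocity model in 2D: state x = [px, py, vx, vy],
    measurement z = [px, py] (e.g. a detection's box center)."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt          # position integrates velocity
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0         # only position is observed
    return F, H

def kf_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of a linear Kalman filter."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new detection z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Fed a stream of detections, the filter's velocity components converge to the object's true image-plane velocity even though only positions are measured, which is what makes the constant-velocity model a workable tracking prior.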
arXiv Detail & Related papers (2021-01-19T10:29:16Z) - End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z) - Detection of 3D Bounding Boxes of Vehicles Using Perspective
Transformation for Accurate Speed Measurement [3.8073142980733]
We present an improved version of our algorithm for detection of 3D bounding boxes of vehicles captured by traffic surveillance cameras.
Our algorithm utilizes the known geometry of vanishing points in the surveilled scene to construct a perspective transformation.
Compared to other published state-of-the-art fully automatic results, our algorithm reduces the mean absolute speed measurement error by 32% (1.10 km/h to 0.75 km/h) and the absolute median error by 40% (0.97 km/h to 0.58 km/h).
arXiv Detail & Related papers (2020-03-29T21:01:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.