CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion
- URL: http://arxiv.org/abs/2010.04837v2
- Date: Tue, 13 Oct 2020 00:28:21 GMT
- Title: CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion
- Authors: Iljoo Baek, Tzu-Chieh Tai, Manoj Bhat, Karun Ellango, Tarang Shah,
Kamal Fuseini, Ragunathan (Raj) Rajkumar
- Abstract summary: Curb detection and tracking are useful in vehicle localization and path planning.
We propose an approach to detect and track curbs by fusing together data from multiple sensors.
Our algorithm maintains over 90% accuracy within 4.5-22 meters on the KITTI dataset and 0-14 meters on our dataset.
- Score: 0.8722958995761769
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reliable curb detection is critical for safe autonomous driving in urban
contexts. Curb detection and tracking are also useful in vehicle localization
and path planning. Past work utilized a 3D LiDAR sensor to determine accurate
distance information and the geometric attributes of curbs. However, such an
approach requires dense point cloud data and is also vulnerable to false
positives from obstacles present on both road and off-road areas. In this
paper, we propose an approach to detect and track curbs by fusing together data
from multiple sensors: sparse LiDAR data, a mono camera and low-cost ultrasonic
sensors. The detection algorithm is based on a single 3D LiDAR and a mono
camera sensor used to detect candidate curb features and it effectively removes
false positives arising from surrounding static and moving obstacles. The
detection accuracy of the tracking algorithm is boosted by using Kalman
filter-based prediction and fusion with lateral distance information from
low-cost ultrasonic sensors. We next propose a line-fitting algorithm that
yields robust results for curb locations. Finally, we demonstrate the practical
feasibility of our solution by testing in different road environments and
evaluating our implementation in a real vehicle (demo video clips
demonstrating our algorithm have been uploaded to YouTube:
https://www.youtube.com/watch?v=w5MwsdWhcy4 and
https://www.youtube.com/watch?v=Gd506RklfG8). Our algorithm maintains over
90% accuracy within 4.5-22 meters and 0-14 meters for the KITTI dataset and
our dataset respectively, and its average processing time per frame is
approximately 10 ms on an Intel i7 x86 CPU and 100 ms on an NVIDIA Xavier board.
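The tracking stage described above fuses Kalman-filter predictions with lateral distance readings from low-cost ultrasonic sensors. The following is a minimal illustrative sketch of such a 1D lateral-distance Kalman update, not the authors' implementation; all noise parameters and variable names are assumptions.

```python
# Minimal 1D Kalman filter sketch: fusing a predicted curb lateral distance
# with a noisy ultrasonic range measurement. All noise parameters below are
# illustrative assumptions, not values from the paper.

class LateralKalman:
    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x = x0   # estimated lateral distance to curb (meters)
        self.p = p0   # estimate variance
        self.q = q    # process noise (drift in curb distance between frames)
        self.r = r    # ultrasonic measurement noise variance

    def predict(self, dx=0.0):
        # dx: expected change in lateral distance from ego-motion (assumed known)
        self.x += dx
        self.p += self.q
        return self.x

    def update(self, z):
        # z: ultrasonic lateral distance reading (meters)
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = LateralKalman(x0=2.0)
for z in [2.1, 1.9, 2.05, 2.0]:
    kf.predict()
    est = kf.update(z)
```

Each ultrasonic reading pulls the estimate toward the measurement by the Kalman gain while shrinking the estimate variance, which is what lets noisy low-cost sensors sharpen the LiDAR/camera prediction.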
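The abstract also mentions a line-fitting algorithm that yields robust curb locations. One common way to achieve robustness to residual false positives is RANSAC-style line fitting; the sketch below illustrates that idea under stated assumptions (iteration count, inlier tolerance, and the 2D point representation are all hypothetical, not taken from the paper).

```python
import random

def fit_line_ransac(points, n_iters=200, tol=0.1, seed=0):
    """Robustly fit a 2D line y = m*x + b to candidate curb points.

    RANSAC sketch: repeatedly sample two points, count inliers within
    `tol` of the hypothesized line, keep the best-supported hypothesis,
    then refine it with least squares on its inlier set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # this simple sketch skips vertical candidate lines
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        denom = (m * m + 1.0) ** 0.5  # for point-to-line distance
        inliers = [(x, y) for x, y in points
                   if abs(m * x - y + b) / denom < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refinement on the winning inlier set
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Synthetic curb points along y = 0.5x + 1, plus two outliers
pts = [(x, 0.5 * x + 1.0) for x in range(10)] + [(3.0, 9.0), (7.0, -4.0)]
m, b = fit_line_ransac(pts)
```

The outliers fall well outside the inlier tolerance of the true line, so they are excluded before the final least-squares refinement, which is the property that makes this family of fits robust to obstacle-induced false positives.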
Related papers
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter [11.081218144245506]
We propose a novel 3D multi-object cooperative tracking algorithm for autonomous driving via a differentiable multi-sensor Kalman Filter.
Our algorithm improves the tracking accuracy by 17% with only 0.037x communication costs compared with the state-of-the-art method in V2V4Real.
arXiv Detail & Related papers (2023-09-26T04:14:13Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, fusion for detection can be performed effectively by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data [0.17205106391379021]
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data.
The detection phase is performed by converting LiDAR streams to computationally tractable depth images, and then, a deep neural network is developed to identify pedestrian candidates.
The tracking phase is a combination of the Kalman filter prediction and an optical flow algorithm to track multiple pedestrians in a scene.
arXiv Detail & Related papers (2021-08-27T16:16:01Z)
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
- SDOF-Tracker: Fast and Accurate Multiple Human Tracking by Skipped-Detection and Optical-Flow [5.041369269600902]
This study aims to improve running speed by performing human detection at a certain frame interval.
We propose a method that complements the detection results with optical flow, based on the fact that a person's appearance changes little between adjacent frames.
On the MOT20 dataset in the MOTChallenge, the proposed SDOF-Tracker achieved the best performance in terms of the total running speed.
arXiv Detail & Related papers (2021-06-27T15:35:35Z)
- Extraction and Assessment of Naturalistic Human Driving Trajectories from Infrastructure Camera and Radar Sensors [0.0]
We present a novel methodology to extract trajectories of traffic objects using infrastructure sensors.
Our vision pipeline accurately detects objects, fuses camera and radar detections and tracks them over time.
We show that our sensor fusion approach successfully combines the advantages of camera and radar detections and outperforms either single sensor.
arXiv Detail & Related papers (2020-04-02T22:28:29Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle to curb distance in real time with mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.