Holistic Grid Fusion Based Stop Line Estimation
- URL: http://arxiv.org/abs/2009.09093v1
- Date: Fri, 18 Sep 2020 21:29:06 GMT
- Title: Holistic Grid Fusion Based Stop Line Estimation
- Authors: Runsheng Xu, Faezeh Tafazzoli, Li Zhang, Timo Rehfeld, Gunther Krehl,
Arunava Seal
- Abstract summary: Knowing in advance where to stop at an intersection is essential for controlling the longitudinal velocity of the vehicle.
Most existing methods in the literature use cameras alone to detect stop lines, which is typically not sufficient in terms of detection range.
We propose a method that takes fused multi-sensor data from a stereo camera and lidar as input and uses a carefully designed convolutional neural network architecture to detect stop lines.
- Score: 5.5476621209686225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intersection scenarios present the most complex traffic situations in
Autonomous Driving and Driving Assistance Systems. Knowing in advance where to
stop at an intersection is essential for controlling the longitudinal velocity
of the vehicle. Most existing methods in the literature use cameras alone to
detect stop lines, which is typically not sufficient in terms of detection
range. To address this issue, we propose a
method that takes advantage of fused multi-sensory data including stereo camera
and lidar as input and utilizes a carefully designed convolutional neural
network architecture to detect stop lines. Our experiments show that the
proposed approach can improve detection range compared to camera data alone,
works under heavy occlusion without explicitly observing the ground markings,
predicts stop lines for all lanes, and enables detection at distances of up to
50 meters.
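The fused grid input described in the abstract can be sketched as follows. This is a minimal illustration only: the grid size, cell resolution, and per-sensor channel layout are assumptions for demonstration, not the authors' actual pre-processing or network architecture.

```python
import numpy as np

def points_to_occupancy_grid(points_xy, grid_size=100, cell_m=0.5):
    """Rasterize 2D sensor points (x forward, y left, in meters) into an
    ego-centered binary occupancy grid of shape (grid_size, grid_size).
    Grid extent and cell size here are illustrative assumptions."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell_m / 2.0  # half-extent of the grid in meters
    for x, y in points_xy:
        col = int((x + half) / cell_m)
        row = int((y + half) / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1.0
    return grid

def fuse_grids(lidar_pts, stereo_pts):
    """Stack one grid per sensor into a multi-channel (C, H, W) tensor
    that a detection CNN could consume as input."""
    lidar_grid = points_to_occupancy_grid(lidar_pts)
    stereo_grid = points_to_occupancy_grid(stereo_pts)
    return np.stack([lidar_grid, stereo_grid], axis=0)

fused = fuse_grids([(10.0, 2.0), (20.0, -3.0)], [(10.2, 2.1)])
print(fused.shape)  # (2, 100, 100)
```

Keeping each sensor in its own channel lets the network learn where the modalities agree, which is one plausible way a fused grid can extend detection range beyond what the camera alone provides.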
Related papers
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded by moving vehicles, shadows, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally adjacent video frames for complementary cues.
We show that exploiting available camera intrinsics and a ground-plane assumption for cross-frame correspondence leads to a lightweight network with significantly improved speed and accuracy.
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption that the vehicle repeatedly traverses the same locations is sufficient to obtain a potent signal that allows iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Threat Detection In Self-Driving Vehicles Using Computer Vision [0.0]
We propose a threat detection mechanism for autonomous self-driving cars using dashcam videos.
Its major components are YOLO for object identification, an advanced lane detection algorithm, and a multi-regression model that measures the distance of each object from the camera.
The final accuracy of our proposed Threat Detection Model (TDM) is 82.65%.
arXiv Detail & Related papers (2022-09-06T12:01:07Z)
- Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach is proposed to integrate two pre-trained networks for extracting high level features for blind-spot vehicle detection.
The fusion of features significantly improves the performance of Faster R-CNN and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2022-08-17T11:10:37Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data [0.17205106391379021]
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data.
The detection phase is performed by converting LiDAR streams into computationally tractable depth images, and then a deep neural network is developed to identify pedestrian candidates.
The tracking phase is a combination of the Kalman filter prediction and an optical flow algorithm to track multiple pedestrians in a scene.
arXiv Detail & Related papers (2021-08-27T16:16:01Z)
- Phase Space Reconstruction Network for Lane Intrusion Action Recognition [9.351931162958465]
In this paper, we propose a novel object-level phase space reconstruction network (PSRNet) for motion time series classification.
Our PSRNet achieves a best accuracy of 98.0%, exceeding existing action recognition approaches by more than 30%.
arXiv Detail & Related papers (2021-02-22T16:18:35Z)
- LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision Sensor [0.0]
This paper explores the novel application of lane marking detection using an event camera.
The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling block.
The efficacy of the proposed work is evaluated using the DVS dataset for lane extraction.
arXiv Detail & Related papers (2020-09-17T02:15:41Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach estimates the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.