Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study
- URL: http://arxiv.org/abs/2112.04042v1
- Date: Tue, 7 Dec 2021 23:42:21 GMT
- Title: Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study
- Authors: Yongkang Liu, Ziran Wang, Kyungtae Han, Zhenyu Shou, Prashant Tiwari,
John H.L. Hansen
- Abstract summary: We introduce a novel vision-cloud data fusion methodology, integrating camera images and Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
- Score: 38.65843674620544
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: With the rapid development of intelligent vehicles and Advanced
Driver-Assistance Systems (ADAS), mixed levels of human driver engagement will
increasingly be involved in the transportation system, making appropriate
visual guidance for drivers vital to preventing potential risks. To advance
the development of visual guidance systems, we introduce a novel vision-cloud
data fusion methodology that integrates camera images and Digital Twin
information from the cloud to help intelligent vehicles make better decisions.
The target vehicle's bounding box is drawn and matched using an object
detector (running on the ego vehicle) and position information (received from
the cloud). The best matching result, 79.2% accuracy at a 0.7
intersection-over-union threshold, is obtained when depth images serve as an
additional feature source. A case study on lane change prediction is
conducted to show the effectiveness of the proposed data fusion methodology.
In the case study, a multi-layer perceptron algorithm with modified lane
change prediction approaches is proposed. Human-in-the-loop simulation
results obtained from the Unity game engine reveal that the proposed model
significantly improves highway driving performance in terms of safety,
comfort, and environmental sustainability.
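The matching step above pairs boxes from the on-board object detector with boxes projected from cloud-supplied Digital Twin positions, accepting pairs whose intersection over union (IoU) clears the 0.7 threshold. The paper itself ships no code, so the following is a minimal sketch of that style of IoU-gated greedy matching; the function names and the greedy strategy are illustrative assumptions, not the authors' implementation.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_boxes(detected: List[Box], projected: List[Box],
                threshold: float = 0.7) -> List[Tuple[int, int]]:
    """Greedily pair detector boxes with cloud-projected boxes whose IoU
    clears the threshold; each projected box is used at most once."""
    matches, used = [], set()
    for i, det in enumerate(detected):
        best_j, best_iou = -1, threshold
        for j, proj in enumerate(projected):
            if j in used:
                continue
            score = iou(det, proj)
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

The reported 79.2% accuracy additionally draws on depth images as a feature source; a natural extension of this sketch would blend a depth-consistency term into the matching score before thresholding.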
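For the lane change case study, the abstract names only a multi-layer perceptron; the feature set, layer sizes, and class labels below are assumptions chosen for illustration rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class LaneChangeMLP(nn.Module):
    """Toy MLP mapping a per-vehicle feature vector (e.g., lateral offset,
    lateral velocity, heading, gaps to neighbors) to three maneuver classes:
    keep lane, change left, change right. The feature choice is hypothetical."""
    def __init__(self, n_features: int = 8, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # raw logits; apply softmax for probabilities

model = LaneChangeMLP()
features = torch.randn(1, 8)                 # one vehicle's feature vector
probs = model(features).softmax(dim=-1)      # class probabilities
```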
Related papers
- DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Autonomous Driving [55.53171248839489]
We propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving.
Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction, and an iterative motion planner.
Experiments conducted on the nuScenes dataset demonstrate the superior planning performance and efficiency of DiFSD.
arXiv Detail & Related papers (2024-09-15T15:55:24Z)
- RainSD: Rain Style Diversification Module for Image Synthesis Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage, generated from the real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated and analyzed.
The performance degradation tendencies of deep neural network-based perception systems for autonomous vehicles have been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
The non-objective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
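The open-loop displacement error quoted here is conventionally the mean L2 distance between predicted and ground-truth future waypoints; a short sketch of that metric, assuming trajectories as (T, 2) arrays in meters, follows.

```python
import numpy as np

def average_displacement_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean L2 distance between predicted and ground-truth waypoints.
    Both arrays have shape (T, 2): T future timesteps, (x, y) in meters."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Hypothetical 4-step horizon
pred = np.array([[0.0, 1.0], [0.0, 2.1], [0.1, 3.2], [0.2, 4.4]])
gt   = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 3.0], [0.0, 4.0]])
print(average_displacement_error(pred, gt))  # ~0.19 m
```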
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
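As a rough illustration of the two-stage pipeline named here (per-frame CNN detection and tracking, then LSTM sequence classification), the sketch below classifies a track of box features; the tensor shapes and two-class labels are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CutInClassifier(nn.Module):
    """Toy second stage of a cut-in predictor: consumes a track of per-frame
    box features (e.g., normalized x, y, w, h from a CNN detector/tracker)
    and classifies the maneuver. Shapes and classes are assumptions."""
    def __init__(self, feat_dim: int = 4, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # {no cut-in, cut-in}

    def forward(self, track: torch.Tensor) -> torch.Tensor:
        # track: (batch, time, feat_dim) sequence of detector outputs
        _, (h_n, _) = self.lstm(track)
        return self.head(h_n[-1])  # logits from the final hidden state

track = torch.randn(1, 30, 4)  # 30 frames of one tracked vehicle's box
logits = CutInClassifier()(track)
```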
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation [36.350348194248014]
Traffic accident anticipation aims to accurately and promptly predict the occurrence of a future accident from dashcam videos.
Existing approaches typically focus on capturing the cues of spatial and temporal context before a future accident occurs.
We propose Deep ReInforced accident anticipation with Visual Explanation, named DRIVE.
arXiv Detail & Related papers (2021-07-21T16:33:21Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
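TransFuser's central idea is to let image and LiDAR representations attend to one another; the following single-block sketch shows that style of attention fusion under assumed token counts and dimensions, and is not TransFuser's actual architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy attention fusion in the spirit of TransFuser: image and LiDAR
    feature tokens are concatenated and mixed with self-attention, so each
    modality can attend to the other. Dimensions are illustrative."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, lidar_tokens):
        tokens = torch.cat([img_tokens, lidar_tokens], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(tokens + fused)  # residual connection
        n_img = img_tokens.shape[1]
        return fused[:, :n_img], fused[:, n_img:]  # split back per modality

img = torch.randn(1, 64, 128)    # 64 image tokens
lidar = torch.randn(1, 64, 128)  # 64 LiDAR (BEV) tokens
img_f, lidar_f = CrossModalFusion()(img, lidar)
```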
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Vehicle trajectory prediction in top-view image sequences based on deep learning method [1.181206257787103]
Estimating and predicting surrounding vehicles' movement is essential for automated vehicles and advanced safety systems.
A model with low computational complexity is proposed, trained on aerial images of the road.
The proposed model can predict a vehicle's future path on any freeway solely from images of the movement history of the target vehicle and its neighbors.
arXiv Detail & Related papers (2021-02-02T20:48:19Z)
- Sensor Fusion of Camera and Cloud Digital Twin Information for Intelligent Vehicles [26.00647601539363]
We introduce a novel sensor fusion methodology, integrating camera images and Digital Twin knowledge from the cloud.
The best matching result, 79.2% accuracy at a 0.7 Intersection over Union (IoU) threshold, is obtained with depth images serving as an additional feature source.
Game-engine-based simulation results also reveal that the visual guidance system, cooperating with the cloud Digital Twin system, could significantly improve driving safety.
arXiv Detail & Related papers (2020-07-08T18:09:54Z)