Sensor Fusion of Camera and Cloud Digital Twin Information for
Intelligent Vehicles
- URL: http://arxiv.org/abs/2007.04350v1
- Date: Wed, 8 Jul 2020 18:09:54 GMT
- Title: Sensor Fusion of Camera and Cloud Digital Twin Information for
Intelligent Vehicles
- Authors: Yongkang Liu, Ziran Wang, Kyungtae Han, Zhenyu Shou, Prashant Tiwari,
and John H. L. Hansen
- Abstract summary: We introduce a novel sensor fusion methodology that integrates the camera image with Digital Twin knowledge from the cloud.
The best matching result, 79.2% accuracy under a 0.7 Intersection over Union (IoU) threshold, is obtained with the depth image serving as an additional feature source.
Game-engine-based simulation results also reveal that the visual guidance system, cooperating with the cloud Digital Twin system, can significantly improve driving safety.
- Score: 26.00647601539363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of intelligent vehicles and Advanced Driving
Assistance Systems (ADAS), mixed levels of human driver engagement are involved in
the transportation system. Under these conditions, visual guidance for drivers is
essential to prevent potential risks. To advance the development of visual guidance
systems, we introduce a novel sensor fusion methodology that integrates the camera
image with Digital Twin knowledge from the cloud. The target vehicle's bounding box
is drawn and matched by combining the results of an object detector running on the
ego vehicle with position information from the cloud. The best matching result,
79.2% accuracy under a 0.7 Intersection over Union (IoU) threshold, is obtained with
the depth image serving as an additional feature source. Game-engine-based simulation
results also reveal that the visual guidance system, cooperating with the cloud
Digital Twin system, can significantly improve driving safety.
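The matching step described above pairs each bounding box produced by the on-board object detector with the box implied by a vehicle's Digital Twin position from the cloud, accepting a pair when their overlap is high enough. The sketch below illustrates this kind of IoU-based matching under a 0.7 threshold; it is a minimal illustration, not the authors' implementation, and the box format, function names, and greedy one-to-one assignment are assumptions (the paper additionally uses the depth image as a feature source, which is omitted here).

```python
# Minimal sketch of IoU-based matching between detector boxes and boxes
# projected from cloud Digital Twin positions. Box format, function names,
# and the greedy matching strategy are illustrative assumptions, not the
# paper's actual implementation.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels


def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def match_boxes(detected: List[Box], twin_projected: List[Box],
                threshold: float = 0.7) -> List[Tuple[int, int, float]]:
    """Greedily pair detector boxes with Digital-Twin-projected boxes
    whose IoU meets the threshold (0.7 as in the paper's evaluation)."""
    pairs = []
    used = set()
    for i, d in enumerate(detected):
        best_j, best_iou = -1, threshold
        for j, t in enumerate(twin_projected):
            if j in used:
                continue
            score = iou(d, t)
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            used.add(best_j)
            pairs.append((i, best_j, best_iou))
    return pairs


if __name__ == "__main__":
    # Hypothetical example: two detector boxes, two cloud-projected boxes.
    detector_boxes = [(100, 120, 220, 260), (400, 150, 520, 300)]
    twin_boxes = [(110, 125, 225, 255), (600, 200, 700, 320)]
    print(match_boxes(detector_boxes, twin_boxes))  # only the first pair exceeds 0.7 IoU
```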
Related papers
- Enhancing Track Management Systems with Vehicle-To-Vehicle Enabled Sensor Fusion [0.0]
This paper proposes a novel Vehicle-to-Vehicle (V2V) enabled track management system.
The core innovation lies in the creation of independent priority track lists, consisting of fused detections validated through V2V communication.
The proposed system also addresses the falsification of V2X signals, which is countered through an initial vehicle identification process using detections from perception sensors.
arXiv Detail & Related papers (2024-04-26T20:54:44Z)
- HawkDrive: A Transformer-driven Visual Perception System for Autonomous Driving in Night Scene [2.5022287664959446]
HawkDrive is a novel vision system with hardware and software solutions.
The hardware, which utilizes stereo vision perception, is paired with the Nvidia Jetson Xavier AGX edge computing device.
Our software for low-light enhancement, depth estimation, and semantic segmentation tasks is a transformer-based neural network.
arXiv Detail & Related papers (2024-04-06T15:10:29Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms that can serve as the basis of a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird's-Eye View [12.723455775659414]
Radar and camera fusion systems have the potential to provide a highly robust and reliable perception system.
Recent advances in camera-based object detection offer new radar-camera fusion possibilities with bird's eye view feature maps.
We propose a novel and flexible fusion network and evaluate its performance on two datasets.
arXiv Detail & Related papers (2023-09-27T08:02:58Z)
- Smart Infrastructure: A Research Junction [5.172393727004225]
We introduce an intelligent research infrastructure equipped with visual sensor technology, located at a public inner-city junction in Aschaffenburg, Germany.
A multiple-view camera system monitors the traffic situation to perceive road users' behavior.
The system is used for research in data generation, evaluating new HAD sensor systems, algorithms, and Artificial Intelligence (AI) training strategies.
arXiv Detail & Related papers (2023-07-12T14:04:12Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work presents a study of the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study [38.65843674620544]
We introduce a novel vision-cloud data fusion methodology, integrating camera image and Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
arXiv Detail & Related papers (2021-12-07T23:42:21Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.