Cyber Mobility Mirror for Enabling Cooperative Driving Automation: A Co-Simulation Platform
- URL: http://arxiv.org/abs/2201.09463v1
- Date: Mon, 24 Jan 2022 05:27:20 GMT
- Title: Cyber Mobility Mirror for Enabling Cooperative Driving Automation: A Co-Simulation Platform
- Authors: Zhengwei Bai, Guoyuan Wu, Xuewei Qi, Kentaro Oguchi, Matthew J. Barth
- Abstract summary: The co-simulation platform can simulate both the real world, with a high-fidelity sensor perception system, and the cyber world, with a real-time 3D reconstruction system.
The mirror-world simulator is responsible for reconstructing 3D objects and their trajectories from the perceived information.
A roadside LiDAR-based real-time vehicle detection and 3D reconstruction system is prototyped as a case study.
- Score: 16.542137414609606
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Endowed with automation and connectivity, Connected and Automated Vehicles
(CAVs) are expected to be revolutionary promoters of Cooperative Driving
Automation (CDA). Nevertheless, CAVs require high-fidelity perception information
about their surroundings, which is available from various on-board sensors, such
as radar, camera, and LiDAR, as well as from vehicle-to-everything (V2X)
communications, but is costly to collect. Therefore, precisely simulating the
sensing process with high-fidelity sensor inputs, and retrieving the perception
information in a timely manner via a cost-effective platform, are of increasing
significance for enabling CDA-related research, e.g., the development of
decision-making or control modules. Most state-of-the-art traffic simulation
studies for CAVs obtain situation-awareness information by directly querying the
intrinsic attributes of simulated objects, which undermines the reliability and
fidelity of testing and validating CDA algorithms. In this study, a
co-simulation platform is developed, which can simulate both the real world
with a high-fidelity sensor perception system and the cyber world (or "mirror"
world) with a real-time 3D reconstruction system. Specifically, the real-world
simulator is mainly in charge of simulating road users (such as vehicles,
bicyclists, and pedestrians) and infrastructure (e.g., traffic signals and
roadside sensors), as well as the object detection process. The mirror-world
simulator is responsible for reconstructing 3D objects and their trajectories
from the perceived information (provided by those roadside sensors in the
real-world simulator) to support the development and evaluation of CDA
algorithms. To illustrate the efficacy of this co-simulation platform, a
roadside LiDAR-based real-time vehicle detection and 3D reconstruction system
is prototyped as a case study.
Related papers
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build 4D world representation and generate real-life and controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- A Joint Approach Towards Data-Driven Virtual Testing for Automated Driving: The AVEAS Project [2.4163276807189282]
There is a significant shortage of real-world data to parametrize and/or validate simulations.
This paper presents the results of the German AVEAS research project.
arXiv Detail & Related papers (2024-05-10T07:36:03Z)
- CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation [44.83732884335725]
Sensor simulation involves modeling traffic participants, such as vehicles, with high quality appearance and articulated geometry.
Current reconstruction approaches struggle on in-the-wild sensor data, due to its sparsity and noise.
We present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry.
arXiv Detail & Related papers (2023-11-02T17:56:59Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Smart Infrastructure: A Research Junction [5.172393727004225]
We introduce an intelligent research infrastructure equipped with visual sensor technology, located at a public inner-city junction in Aschaffenburg, Germany.
A multiple-view camera system monitors the traffic situation to perceive road users' behavior.
The system is used for research in data generation, the evaluation of new HAD sensor systems, algorithms, and Artificial Intelligence (AI) training strategies.
arXiv Detail & Related papers (2023-07-12T14:04:12Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Cyber Mobility Mirror: Deep Learning-based Real-time 3D Object Perception and Reconstruction Using Roadside LiDAR [14.566471856473813]
Cyber Mobility Mirror is a next-generation real-time traffic surveillance system for 3D object detection, classification, tracking, and reconstruction.
Results from field tests demonstrate that our prototype system can provide satisfactory perception performance with 96.99% precision and 83.62% recall.
High-fidelity real-time traffic conditions can be displayed on the GUI of the equipped vehicle at a frequency of 3-4 Hz. (A sketch of how such precision/recall figures are typically computed appears after this list.)
arXiv Detail & Related papers (2022-02-28T01:58:24Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera [11.29865843123467]
We propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera.
Our approach is based on a combination of three deep neural networks to estimate the 3D vehicle bounding box, depth, and optical flow from a sequence of images.
arXiv Detail & Related papers (2020-05-04T16:41:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.