RITA: Boost Driving Simulators with Realistic Interactive Traffic Flow
- URL: http://arxiv.org/abs/2211.03408v5
- Date: Fri, 8 Dec 2023 04:23:10 GMT
- Title: RITA: Boost Driving Simulators with Realistic Interactive Traffic Flow
- Authors: Zhengbang Zhu, Shenyu Zhang, Yuzheng Zhuang, Yuecheng Liu, Minghuan
Liu, Liyuan Mao, Ziqin Gong, Shixiong Kai, Qiang Gu, Bin Wang, Siyuan Cheng,
Xinyu Wang, Jianye Hao and Yong Yu
- Abstract summary: We propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators.
RITA is developed with consideration of three key features, i.e., fidelity, diversity, and controllability.
We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios.
- Score: 42.547724297198585
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High-quality traffic flow generation is the core module in building
simulators for autonomous driving. However, the majority of available
simulators are incapable of replicating traffic patterns that accurately
reflect the various features of real-world data while also simulating
human-like reactive responses to the tested autopilot driving strategies.
Taking a step toward addressing this problem, we propose Realistic
Interactive TrAffic flow (RITA) as an integrated component of existing driving
simulators to provide high-quality traffic flow for the evaluation and
optimization of the tested driving strategies. RITA is developed with
consideration of three key features, i.e., fidelity, diversity, and
controllability, and consists of two core modules called RITABackend and
RITAKit. RITABackend is built to support vehicle-wise control and provide
traffic generation models from real-world datasets, while RITAKit is developed
with easy-to-use interfaces for controllable traffic generation via
RITABackend. We demonstrate RITA's capacity to create diversified and
high-fidelity traffic simulations in several highly interactive highway
scenarios. The experimental findings demonstrate that our produced RITA traffic
flows exhibit all three key features, hence enhancing the completeness of
driving strategy evaluation. Moreover, we showcase the possibility for further
improvement of baseline strategies through online fine-tuning with RITA traffic
flows.
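
The abstract above describes a two-module design: RITABackend holds the data-driven, vehicle-wise behavior models, and RITAKit wraps it with easy-to-use interfaces for controllable traffic generation. The Python sketch below illustrates that split in minimal form; every class, method, and parameter name here (TrafficBackend, TrafficKit, spawn, make_flow, rollout, the driving styles) is a hypothetical stand-in and not RITA's actual API.

```python
# Hypothetical sketch of a RITA-like two-module design: the "kit" exposes an
# easy-to-use controllable interface, the "backend" owns per-vehicle models.
# All names and signatures are illustrative stand-ins, not RITA's real API.
import random
from dataclasses import dataclass


@dataclass
class VehicleState:
    x: float  # longitudinal position (m)
    v: float  # speed (m/s)


class TrafficBackend:
    """Backend stand-in: vehicle-wise control from (stubbed) data-driven models."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.vehicles: list[VehicleState] = []

    def spawn(self, n: int, style: str) -> None:
        # A real backend would sample behavior models fitted to real-world
        # datasets; here the style only shifts the initial speed distribution.
        base_v = {"aggressive": 30.0, "conservative": 20.0}.get(style, 25.0)
        for _ in range(n):
            self.vehicles.append(VehicleState(x=self.rng.uniform(0.0, 200.0),
                                              v=base_v + self.rng.uniform(-2.0, 2.0)))

    def step(self, dt: float) -> None:
        # Placeholder per-vehicle policy: constant-speed integration.
        for veh in self.vehicles:
            veh.x += veh.v * dt


class TrafficKit:
    """Kit stand-in: simple controllable traffic generation via the backend."""

    def __init__(self, backend: TrafficBackend):
        self.backend = backend

    def make_flow(self, n_vehicles: int, style: str = "neutral") -> None:
        self.backend.spawn(n_vehicles, style)

    def rollout(self, horizon: int, dt: float = 0.1) -> list[VehicleState]:
        for _ in range(horizon):
            self.backend.step(dt)
        return self.backend.vehicles


if __name__ == "__main__":
    kit = TrafficKit(TrafficBackend(seed=42))
    kit.make_flow(n_vehicles=5, style="aggressive")
    print(kit.rollout(horizon=50)[:2])
```

Keeping the behavior models behind a thin kit like this is what allows models fitted to different real-world datasets to be swapped in without changing the user-facing interface.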
Related papers
- Traffic Co-Simulation Framework Empowered by Infrastructure Camera Sensing and Reinforcement Learning [4.336971448707467]
Multi-agent reinforcement learning (MARL) is particularly effective for learning control strategies for traffic lights in a network using iterative simulations.
This study proposes a co-simulation framework integrating CARLA and SUMO, which combines high-fidelity 3D modeling with large-scale traffic flow simulation.
Experiments in the test-bed demonstrate the effectiveness of the proposed MARL approach in enhancing traffic conditions using real-time camera-based detection.
arXiv Detail & Related papers (2024-12-05T07:01:56Z)
- Revolutionizing Traffic Sign Recognition: Unveiling the Potential of Vision Transformers [0.0]
Traffic Sign Recognition (TSR) holds a vital role in advancing driver assistance systems and autonomous vehicles.
This study explores three variants of Vision Transformers (PVT, TNT, LNL) and six convolutional neural networks (AlexNet, ResNet, VGG16, MobileNet, EfficientNet, GoogleNet) as baseline models.
To address the shortcomings of traditional methods, a novel pyramid EATFormer backbone is proposed, amalgamating Evolutionary Algorithms (EAs) with the Transformer architecture.
arXiv Detail & Related papers (2024-04-29T19:18:52Z)
- CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning [38.63187494867502]
CtRL-Sim is a method that leverages return-conditioned offline reinforcement learning (RL) to efficiently generate reactive and controllable traffic agents.
We show that CtRL-Sim can generate realistic safety-critical scenarios while providing fine-grained control over agent behaviours (a minimal sketch of return-to-go conditioning appears after this list).
arXiv Detail & Related papers (2024-03-29T02:10:19Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Development of A Stochastic Traffic Environment with Generative Time-Series Models for Improving Generalization Capabilities of Autonomous Driving Agents [0.0]
We develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data.
The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles.
We demonstrate through simulations that RL agents trained on the GAN-based traffic simulator have stronger generalization capabilities than RL agents trained on simple rule-driven simulators (a minimal GAN training sketch appears after this list).
arXiv Detail & Related papers (2020-06-10T13:14:34Z)
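
As referenced in the CtRL-Sim entry above, return-conditioned offline RL makes agent behaviour controllable by feeding the policy a target return and shrinking that target by each collected reward (the return-to-go). The sketch below shows only that bookkeeping with a toy policy, dynamics, and reward; none of it is CtRL-Sim's actual model or environment.

```python
# Toy sketch of return-to-go conditioning, the mechanism behind the
# return-conditioned offline RL used by CtRL-Sim for controllable agents.
# The policy, dynamics, and reward here are stand-ins, not CtRL-Sim itself.
import numpy as np


def policy(state: np.ndarray, return_to_go: float) -> float:
    """Toy return-conditioned policy: a higher remaining target return
    maps to a more aggressive commanded speed (stub for a learned model)."""
    speed_limit = 30.0
    aggressiveness = float(np.clip(return_to_go / 100.0, 0.0, 1.0))
    return aggressiveness * speed_limit


def rollout(target_return: float, horizon: int = 20, dt: float = 0.1) -> float:
    """Condition the policy on a target return and decrement it by each
    received reward -- the standard return-to-go bookkeeping."""
    state = np.zeros(2)                  # [position, speed]
    rtg, total_reward = target_return, 0.0
    for _ in range(horizon):
        speed = policy(state, rtg)
        state = np.array([state[0] + speed * dt, speed])
        reward = speed * dt              # toy reward: progress this step
        total_reward += reward
        rtg -= reward                    # remaining return-to-go shrinks
    return total_reward


# The same policy produces different behaviours for different target returns.
for target in (20.0, 60.0, 100.0):
    print(f"target={target:6.1f}  achieved return={rollout(target):6.1f}")
```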
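
The last entry above trains a GAN on real-life trajectory data; the sketch below shows the corresponding adversarial training loop on stand-in trajectory tensors. The network sizes, the synthetic "real" data, and the hyper-parameters are placeholders, not the paper's architecture.

```python
# Minimal sketch of training a GAN on vehicle trajectories, in the spirit of
# the generative traffic-environment paper above. The networks, stand-in
# "real" data, and hyper-parameters are placeholders, not the paper's setup.
import torch
import torch.nn as nn

T, D, Z = 20, 2, 16  # trajectory length, state dimension, noise dimension

generator = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, T * D))
discriminator = nn.Sequential(nn.Linear(T * D, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()


def real_batch(batch: int = 32) -> torch.Tensor:
    """Stand-in for recorded trajectories: noisy constant-velocity tracks."""
    t = torch.linspace(0, 1, T).view(1, T, 1)
    vel = torch.rand(batch, 1, D) * 10.0
    return (t * vel + 0.05 * torch.randn(batch, T, D)).reshape(batch, T * D)


for step in range(200):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), Z))

    # Discriminator update: label real trajectories 1, generated ones 0.
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
              bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: make the discriminator predict 1 for generated data.
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"final d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```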