COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles
- URL: http://arxiv.org/abs/2205.02222v1
- Date: Wed, 4 May 2022 17:55:12 GMT
- Title: COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles
- Authors: Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu
- Abstract summary: We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
- Score: 54.61668577827041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical sensors and learning algorithms for autonomous vehicles have
dramatically advanced in the past few years. Nonetheless, the reliability of
today's autonomous vehicles is hindered by the limited line-of-sight sensing
capability and the brittleness of data-driven methods in handling extreme
situations. With recent developments of telecommunication technologies,
cooperative perception with vehicle-to-vehicle communications has become a
promising paradigm to enhance autonomous driving in dangerous or emergency
situations. We introduce COOPERNAUT, an end-to-end learning model that uses
cross-vehicle perception for vision-based cooperative driving. Our model
encodes LiDAR information into compact point-based representations that can be
transmitted as messages between vehicles via realistic wireless channels. To
evaluate our model, we develop AutoCastSim, a network-augmented driving
simulation framework with example accident-prone scenarios. Our experiments on
AutoCastSim suggest that our cooperative perception driving models lead to a
40% improvement in average success rate over egocentric driving models in these
challenging driving situations and a 5 times smaller bandwidth requirement than
prior work V2VNet. COOPERNAUT and AUTOCASTSIM are available at
https://ut-austin-rpl.github.io/Coopernaut/.
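The abstract's key idea is compressing raw LiDAR into compact point-based messages small enough for realistic V2V channels. COOPERNAUT itself uses a learned point encoder; as a simplified, hypothetical illustration of the bandwidth-reduction step, the sketch below downsamples a point cloud with farthest-point sampling, a standard technique in point-based networks (the function name and message size are assumptions, not the paper's API):

```python
import numpy as np

def encode_lidar_message(points: np.ndarray, n_keypoints: int = 128) -> np.ndarray:
    """Downsample an (N, 3) LiDAR cloud to a compact point-based message
    via farthest-point sampling, preserving spatial coverage while
    shrinking the payload that must cross the wireless channel."""
    n = points.shape[0]
    selected = [0]  # arbitrary seed point
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(1, min(n_keypoints, n)):
        idx = int(np.argmax(dists))  # farthest point from the selected set
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return points[selected]

# Example: a 10,000-point cloud shrinks to 128 keypoints before transmission.
cloud = np.random.default_rng(0).uniform(-50, 50, size=(10_000, 3))
msg = encode_lidar_message(cloud, n_keypoints=128)
```

A learned encoder would additionally attach feature vectors to each keypoint; this sketch only shows the geometric compression that makes cross-vehicle messaging feasible.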
Related papers
- Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization [2.07180164747172]
This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization.
It finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication.
arXiv Detail & Related papers (2023-08-31T17:18:15Z)
- Selective Communication for Cooperative Perception in End-to-End Autonomous Driving [8.680676599607123]
We propose a novel selective communication algorithm for cooperative perception.
Our algorithm is shown to produce higher success rates than a random selection approach on previously studied safety-critical driving scenario simulations.
arXiv Detail & Related papers (2023-05-26T18:13:17Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
Wirelessly connecting ATSCs expands the cyber-attack surface and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion for one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous Driving Tasks [11.489187712465325]
An autonomous driving system should effectively use the information collected from the various sensors in order to form an abstract description of the world.
Deep learning models, such as autoencoders, can be used for that purpose, as they can learn compact latent representations from a stream of incoming data.
This work proposes CARNet, a Combined dynAmic autoencodeR NETwork architecture that utilizes an autoencoder combined with a recurrent neural network to learn the current latent representation.
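The CARNet entry describes pairing an autoencoder with a recurrent network so that a compact latent state is updated over a stream of sensor data. As a minimal sketch of that pattern (linear layers and a plain tanh recurrence stand in for the deep encoder and RNN; all weights and dimensions here are illustrative, not CARNet's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw observation dimension and the compact latent it is encoded into.
OBS_DIM, LATENT_DIM = 64, 8

# Linear stand-ins for the autoencoder's encoder and decoder.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, OBS_DIM))
W_dec = rng.normal(scale=0.1, size=(OBS_DIM, LATENT_DIM))

# A minimal recurrent cell mixing the previous hidden state with the
# current latent, standing in for the RNN that models latent dynamics.
W_h = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))
W_z = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))

def step(obs, h):
    z = W_enc @ obs                 # encode observation into the latent space
    h = np.tanh(W_h @ h + W_z @ z)  # update the recurrent latent state
    recon = W_dec @ z               # decoder reconstructs the observation
    return h, z, recon

# Roll the combined model over a short stream of incoming sensor data.
h = np.zeros(LATENT_DIM)
for t in range(5):
    obs = rng.normal(size=OBS_DIM)
    h, z, recon = step(obs, h)
```

In a trained model the reconstruction error on `recon` would drive the autoencoder's learning while the recurrent state `h` carries temporal context between frames.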
arXiv Detail & Related papers (2022-05-18T04:15:42Z)
- Collaborative Driving: Learning-Aided Joint Topology Formulation and Beamforming [24.54541437306899]
We envision collaborative autonomous driving, a new framework that jointly controls driving topology and formulates vehicular networks in the mmWave/THz bands.
As a swarm intelligence system, the collaborative driving scheme goes beyond existing autonomous driving patterns based on single-vehicle intelligence.
We show two promising approaches for mmWave/THz-based vehicle-to-vehicle (V2V) communications.
arXiv Detail & Related papers (2022-03-18T12:50:35Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
However, small underlying datasets often lack the interesting and challenging edge cases needed for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.