Cooperative Perception with Deep Reinforcement Learning for Connected
Vehicles
- URL: http://arxiv.org/abs/2004.10927v1
- Date: Thu, 23 Apr 2020 01:44:12 GMT
- Title: Cooperative Perception with Deep Reinforcement Learning for Connected
Vehicles
- Authors: Shunsuke Aoki, Takamasa Higuchi, Onur Altintas
- Abstract summary: We present a cooperative perception scheme with deep reinforcement learning to enhance detection accuracy for surrounding objects.
Our scheme mitigates the network load in vehicular communication networks and enhances communication reliability.
- Score: 7.7003495898919265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensor-based perception on vehicles is becoming prevalent and important for
enhancing road safety. Autonomous driving systems use cameras, LiDAR, and
radar to detect surrounding objects, while human-driven vehicles use them to
assist the driver. However, environmental perception by an individual vehicle
has limitations in coverage and/or detection accuracy. For example, a
vehicle cannot detect objects occluded by other moving/static obstacles. In
this paper, we present a cooperative perception scheme with deep reinforcement
learning to enhance the detection accuracy for the surrounding objects. By
using the deep reinforcement learning to select the data to transmit, our
scheme mitigates the network load in vehicular communication networks and
enhances the communication reliability. To design, test, and verify the
cooperative perception scheme, we develop a Cooperative & Intelligent Vehicle
Simulation (CIVS) Platform, which integrates three software components: traffic
simulator, vehicle simulator, and object classifier. Our evaluation shows that the
scheme decreases packet loss and thereby increases detection accuracy by up
to 12%, compared to the baseline protocol.
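The core idea — using reinforcement learning to decide which sensor data to transmit so the channel is not overloaded — can be sketched as follows. This is an illustrative toy, not the authors' actual design: the state features (whether an object is likely occluded for neighbors, bucketed channel load), the reward shaping, and tabular Q-learning are all assumptions made for the sketch.

```python
import random

# Hypothetical sketch: a Q-learning agent decides, per detected object,
# whether to transmit its data to nearby vehicles. State, reward, and
# discretization are illustrative assumptions, not the paper's design.

ACTIONS = (0, 1)  # 0 = keep local, 1 = transmit


def make_state(occluded_for_others: bool, channel_load: float) -> tuple:
    """Discretize the context: is the object likely hidden from
    neighbors, and how busy is the channel (load bucketed in thirds)?"""
    return (occluded_for_others, min(int(channel_load * 3), 2))


def reward(action: int, occluded_for_others: bool, channel_load: float) -> float:
    """Reward transmitting data others cannot see themselves; penalize
    adding load to a congested channel (a proxy for packet loss)."""
    if action == 1:
        return (1.0 if occluded_for_others else 0.2) - channel_load
    return 0.0


class TransmitSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # state -> [Q(keep), Q(transmit)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state) -> int:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        qs = self.q.get(state, [0.0, 0.0])
        return max(ACTIONS, key=lambda a: qs[a])  # exploit

    def update(self, state, action, r, next_state):
        qs = self.q.setdefault(state, [0.0, 0.0])
        nxt = max(self.q.get(next_state, [0.0, 0.0]))
        qs[action] += self.alpha * (r + self.gamma * nxt - qs[action])
```

After training on simulated detections, such an agent learns to transmit objects that neighbors cannot see when the channel is idle, and to stay silent when the channel is congested and the object is visible to others anyway — the bandwidth/accuracy trade-off the abstract describes.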
Related papers
- A neural-network based anomaly detection system and a safety protocol to protect vehicular network [0.0]
This thesis addresses the use of Cooperative Intelligent Transport Systems (CITS) to improve road safety and efficiency by enabling vehicle-to-vehicle communication.
To ensure safety, the thesis proposes a Machine Learning-based Misbehavior Detection System (MDS) using Long Short-Term Memory (LSTM) networks.
arXiv Detail & Related papers (2024-11-11T14:15:59Z) - Improving automatic detection of driver fatigue and distraction using
machine learning [0.0]
Driver fatigue and distracted driving are important factors in traffic accidents.
We present techniques for simultaneously detecting fatigue and distracted driving behaviors using vision-based and machine learning-based approaches.
arXiv Detail & Related papers (2024-01-04T06:33:46Z) - Selective Communication for Cooperative Perception in End-to-End
Autonomous Driving [8.680676599607123]
We propose a novel selective communication algorithm for cooperative perception.
Our algorithm is shown to produce higher success rates than a random selection approach on previously studied safety-critical driving scenario simulations.
arXiv Detail & Related papers (2023-05-26T18:13:17Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Edge-Aided Sensor Data Sharing in Vehicular Communication Networks [8.67588704947974]
We consider sensor data sharing and fusion in a vehicular network with both vehicle-to-infrastructure and vehicle-to-vehicle communication.
We propose a method, named Bidirectional Feedback Noise Estimation (BiFNoE), in which an edge server collects and caches sensor measurement data from vehicles.
We show that the perception accuracy is on average improved by around 80% with only 12 kbps uplink and 28 kbps downlink bandwidth.
arXiv Detail & Related papers (2022-06-17T16:30:56Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Collaborative 3D Object Detection for Automatic Vehicle Systems via
Learnable Communications [8.633120731620307]
We propose a novel collaborative 3D object detection framework that consists of three components.
Experiment results and bandwidth usage analysis demonstrate that our approach can save communication and computation costs.
arXiv Detail & Related papers (2022-05-24T07:17:32Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object
Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and
Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.