Human-Vehicle Cooperative Visual Perception for Shared Autonomous
Driving
- URL: http://arxiv.org/abs/2112.09298v1
- Date: Fri, 17 Dec 2021 03:17:05 GMT
- Title: Human-Vehicle Cooperative Visual Perception for Shared Autonomous
Driving
- Authors: Yiyue Zhao, Cailin Lei, Yu Shen, Yuchuan Du, Qijun Chen
- Abstract summary: This paper proposes a human-vehicle cooperative visual perception method to enhance the visual perception ability of shared autonomous driving.
With transfer learning, object detection reaches an mAP of 75.52%, laying a solid foundation for visual fusion.
This study pioneers a cooperative visual perception solution for shared autonomous driving and validates it in real-world complex traffic conflict scenarios.
- Score: 9.537146822132904
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the development of key technologies such as environment perception, the
automation level of autonomous vehicles has been increasing. However, until
highly autonomous driving is reached, a human driver must still take part in
the driving process to keep human-vehicle shared driving safe. Existing work on
human-vehicle cooperative driving focuses on automotive engineering and driver
behavior, with few studies addressing visual perception; because current
systems perform poorly in complex road traffic conflict scenarios, cooperative
visual perception needs further study. In addition, the autonomous driving
perception system cannot correctly interpret the characteristics of manual
driving. Against this background, this paper proposes a human-vehicle
cooperative visual perception method that enhances the visual perception
ability of shared autonomous driving by combining transfer learning with an
image fusion algorithm for complex road traffic scenarios. With transfer
learning, object detection reaches an mAP of 75.52%, laying a solid foundation
for visual fusion. The fusion experiments further show that human-vehicle
cooperative visual perception highlights the riskiest zone and predicts the
conflict object's trajectory more precisely. This study pioneers a cooperative
visual perception solution for shared autonomous driving, validated in
real-world complex traffic conflict scenarios, which can better support
downstream planning and control and improve the safety of autonomous vehicles.
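As a rough illustration of the transfer-learning step described above (adapting a detector pretrained on a large generic dataset to the target traffic scenes), a minimal sketch follows. The use of torchvision's Faster R-CNN, the class list, and the hyperparameters are assumptions for illustration; the paper's exact detector and training setup are not given in this summary.

```python
# Minimal transfer-learning sketch: start from a COCO-pretrained detector and
# fine-tune a new classification head on a traffic dataset. The detector,
# class count, and hyperparameters are illustrative assumptions, not the
# paper's exact setup.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 5  # assumed: background + car, pedestrian, cyclist, truck

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Freeze the pretrained backbone so only the new head is adapted at first.
for p in model.backbone.parameters():
    p.requires_grad = False

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=5e-3, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(model, loader, device="cuda"):
    """loader yields (images, targets) with boxes and labels per image."""
    model.train().to(device)
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)  # dict of detection losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```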
Related papers
- Scalable Decentralized Cooperative Platoon using Multi-Agent Deep
Reinforcement Learning [2.5499055723658097]
This paper introduces a vehicle platooning approach designed to enhance traffic flow and safety.
It is developed using deep reinforcement learning in the Unity 3D game engine.
The proposed platooning model focuses on scalability, decentralization, and fostering positive cooperation.
arXiv Detail & Related papers (2023-12-11T22:04:38Z)
- Decision Making for Autonomous Driving in Interactive Merge Scenarios via Learning-based Prediction [39.48631437946568]
This paper focuses on the complex task of merging into moving traffic where uncertainty emanates from the behavior of other drivers.
We frame the problem as a partially observable Markov decision process (POMDP) and solve it online with Monte Carlo tree search.
The solution to the POMDP is a policy that performs high-level driving maneuvers, such as giving way to an approaching car, keeping a safe distance from the vehicle in front or merging into traffic.
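As a rough sketch of online POMDP planning with Monte Carlo tree search, the snippet below runs a simplified one-level search over the high-level maneuvers the summary mentions; a full implementation would expand a tree over belief states. The maneuver names and the generative model `simulate` are assumptions, not the paper's code.

```python
# Flat one-level Monte Carlo search over high-level maneuvers, assuming a
# belief sampler and a generative model; a simplified stand-in for full MCTS.
import math
import random

MANEUVERS = ["give_way", "keep_distance", "merge"]  # assumed action set

def rollout(state, simulate, depth):
    """Estimate a state's return by simulating random maneuvers."""
    ret = 0.0
    for _ in range(depth):
        state, reward, done = simulate(state, random.choice(MANEUVERS))
        ret += reward
        if done:
            break
    return ret

def plan(belief, simulate, iters=1000, depth=10, c=1.4):
    """belief() samples a state; simulate(s, a) -> (next_state, reward, done)."""
    totals = {a: 0.0 for a in MANEUVERS}
    visits = {a: 0 for a in MANEUVERS}
    for n in range(1, iters + 1):
        untried = [a for a in MANEUVERS if visits[a] == 0]
        if untried:
            action = untried[0]  # try every maneuver at least once
        else:
            # UCB1 balances estimated value against uncertainty.
            action = max(MANEUVERS, key=lambda a: totals[a] / visits[a]
                         + c * math.sqrt(math.log(n) / visits[a]))
        state = belief()  # handle partial observability by sampling states
        state, reward, done = simulate(state, action)
        ret = reward if done else reward + rollout(state, simulate, depth)
        totals[action] += ret
        visits[action] += 1
    # Execute the maneuver with the best average simulated return.
    return max(MANEUVERS, key=lambda a: totals[a] / visits[a])
```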
arXiv Detail & Related papers (2023-03-29T16:12:45Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Intelligent Perception System for Vehicle-Road Cooperation [0.0]
Vehicle-road cooperative autonomous driving technology can expand the vehicle's perception range, cover perception blind spots, and improve perception accuracy.
This project mainly uses lidar to develop data fusion schemes that share and combine data from vehicle and roadside equipment, enabling the detection and tracking of dynamic targets.
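A minimal sketch of the core fusion idea, bringing roadside lidar points into the vehicle frame with a known extrinsic calibration and merging the two clouds, is shown below; the 4x4 transform and point format are assumptions, and the project's actual fusion scheme is not detailed in this summary.

```python
# Merge vehicle and roadside lidar clouds in the vehicle frame, assuming a
# calibrated 4x4 rigid transform from the roadside to the vehicle frame.
import numpy as np

def to_homogeneous(points_xyz: np.ndarray) -> np.ndarray:
    """(N, 3) xyz points -> (N, 4) homogeneous coordinates."""
    return np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])

def fuse_point_clouds(vehicle_pts: np.ndarray,
                      roadside_pts: np.ndarray,
                      T_vehicle_from_road: np.ndarray) -> np.ndarray:
    """Return one (N, 3) cloud with roadside points mapped into the vehicle frame."""
    road_in_vehicle = (T_vehicle_from_road @ to_homogeneous(roadside_pts).T).T[:, :3]
    return np.vstack([vehicle_pts, road_in_vehicle])

# A detector/tracker for dynamic targets would then run on the merged cloud.
```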
arXiv Detail & Related papers (2022-08-30T08:10:34Z)
- Exploring the trade off between human driving imitation and safety for traffic simulation [0.34410212782758043]
We show that a trade-off exists between imitating human driving and maintaining safety when learning driving policies.
We propose a multi-objective learning algorithm (MOPPO) that improves both objectives together.
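To make the trade-off concrete, a scalarized reward is the simplest baseline formulation; the sketch below is a generic weighted sum, not the MOPPO algorithm itself.

```python
# Generic weighted-sum reward illustrating the imitation/safety trade-off;
# an assumed baseline formulation, not MOPPO.
def combined_reward(imitation_r: float, safety_r: float, w: float = 0.5) -> float:
    """w in [0, 1]: w=1 rewards imitating human driving only, w=0 safety only."""
    return w * imitation_r + (1.0 - w) * safety_r
```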
arXiv Detail & Related papers (2022-08-09T14:30:19Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a companion module implemented as a tiny neural network.
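As a toy sketch of a policy network with the described outputs (acceleration and steering angle from a state vector), consider the module below; the layer sizes and tanh output range are assumptions, not the paper's architecture.

```python
# Toy policy head mapping a state vector to acceleration and steering angle.
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    def __init__(self, state_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh())  # two outputs squashed to [-1, 1]

    def forward(self, state: torch.Tensor):
        accel, steer = self.net(state).unbind(-1)
        return accel, steer  # rescaled to physical units downstream

planner = TinyPlanner()
accel, steer = planner(torch.randn(1, 64))
```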
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
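A minimal sketch of attention-based sensor fusion in this spirit, with image and LiDAR feature tokens mixed by a shared transformer encoder, follows; dimensions, layer counts, and token shapes are assumptions, not TransFuser's actual configuration.

```python
# Fuse image and LiDAR feature tokens with transformer self-attention;
# sizes are illustrative assumptions, not the TransFuser configuration.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, img_tokens, lidar_tokens):
        """img_tokens: (B, Ni, dim); lidar_tokens: (B, Nl, dim)."""
        tokens = torch.cat([img_tokens, lidar_tokens], dim=1)
        fused = self.encoder(tokens)  # self-attention mixes both modalities
        ni = img_tokens.shape[1]
        return fused[:, :ni], fused[:, ni:]  # split back per modality

# Example: fuse 64 image tokens with 64 BEV LiDAR tokens (assumed shapes).
fusion = AttentionFusion()
img_f, lidar_f = fusion(torch.randn(1, 64, 256), torch.randn(1, 64, 256))
```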
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Deep Reinforcement Learning for Human-Like Driving Policies in Collision Avoidance Tasks of Self-Driving Cars [1.160208922584163]
We introduce a model-free, deep reinforcement learning approach to generate automated human-like driving policies.
We study a static obstacle avoidance task on a two-lane highway road in simulation.
We demonstrate that our approach leads to human-like driving policies.
arXiv Detail & Related papers (2020-06-07T18:20:33Z)