Decentralized cooperative perception for autonomous vehicles: Learning
to value the unknown
- URL: http://arxiv.org/abs/2301.01250v1
- Date: Mon, 12 Dec 2022 00:01:27 GMT
- Title: Decentralized cooperative perception for autonomous vehicles: Learning
to value the unknown
- Authors: Maxime Chaveroche, Franck Davoine, Véronique Cherfaoui
- Abstract summary: We propose a decentralized collaboration, i.e. peer-to-peer, in which the agents are active in their quest for full perception.
We propose a way to learn a communication policy that reverses the usual communication paradigm by only requesting from other vehicles what is unknown to the ego-vehicle.
- In particular, we propose Locally Predictable VAE (LP-VAE), which appears to produce better belief states for predictions than state-of-the-art models.
- Score: 1.2246649738388387
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, we have witnessed accidents involving autonomous vehicles
and their lack of sufficient information. One way to tackle this issue is to
benefit from the perception of different viewpoints, namely cooperative
perception. We propose here a decentralized, i.e. peer-to-peer, collaboration
in which the agents are active in their quest for full perception, asking for
specific areas in their surroundings about which they would like to know more.
Ultimately, we want to optimize a trade-off between the maximization of
knowledge about moving objects and the minimization of the total volume of
information received from others, to limit communication costs and message
processing time. For this, we propose a way to learn a communication policy
that reverses the usual communication paradigm by only requesting from other
vehicles what is unknown to the ego-vehicle, instead of filtering on the sender
side. We tested three different generative models to be taken as base for a
Deep Reinforcement Learning (DRL) algorithm, and compared them to a
broadcasting policy and a policy randomly selecting areas. In particular, we
propose Locally Predictable VAE (LP-VAE), which appears to produce better
belief states for predictions than state-of-the-art models, both as a
standalone model and in the context of DRL. Experiments were conducted in the
driving simulator CARLA. Our best models reached on average a gain of 25% of
the total complementary information, while only requesting about 5% of the
ego-vehicle's perceptual field. This trade-off is adjustable through the
interpretable hyperparameters of our reward function.
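The trade-off described above can be pictured as a reward that weighs the complementary information gained against the volume of information requested. The sketch below is purely illustrative: the function and parameter names (`communication_reward`, `alpha`, `beta`) are assumptions for exposition, not the paper's actual reward function.

```python
def communication_reward(knowledge_gain: float,
                         requested_fraction: float,
                         alpha: float = 1.0,
                         beta: float = 0.2) -> float:
    """Toy trade-off between perception gain and communication cost.

    knowledge_gain: fraction of complementary information obtained (0..1).
    requested_fraction: fraction of the ego-vehicle's perceptual field
        requested from peers (0..1).
    alpha, beta: interpretable weights on the two competing terms,
        analogous to the adjustable hyperparameters the abstract mentions.
    """
    return alpha * knowledge_gain - beta * requested_fraction

# Roughly the operating point reported in the abstract:
# ~25% information gain while requesting ~5% of the perceptual field.
r = communication_reward(0.25, 0.05)
```

Raising `beta` relative to `alpha` would push a learned policy toward requesting less, shifting the operating point along the same trade-off curve.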
Related papers
- V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models [31.537045261401666]
Vehicle-to-vehicle (V2V) communication methods have been proposed, but they have tended to focus on detection and tracking.
We propose a novel problem setting that integrates Large Language Models (LLMs) into cooperative autonomous driving.
We also propose our baseline method Vehicle-to-Vehicle Large Language Model (V2V-LLM), which uses an LLM to fuse perception information from multiple connected autonomous vehicles.
arXiv Detail & Related papers (2025-02-14T08:05:41Z) - Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene [56.73568220959019]
Collaborative autonomous driving (CAV) seems like a promising direction, but collecting data for development is non-trivial.
We introduce a novel surrogate: generating realistic perception from different viewpoints in a driving scene.
We present the very first solution, using a combination of simulated collaborative data and real ego-car data.
arXiv Detail & Related papers (2025-02-10T17:07:53Z) - Demystifying the Physics of Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making [6.243971093896272]
We use a continuous proximal policy optimization-based DRL algorithm as the baseline model and add a multi-head attention framework in an open-source AV simulation environment.
We show that the weights in the first head encode the positions of the neighboring vehicles while the second head focuses on the leader vehicle exclusively.
arXiv Detail & Related papers (2024-03-18T02:59:13Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction [71.97877759413272]
Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions.
Recent methods have achieved strong performances using Multi-Choice Learning objectives like winner-takes-all (WTA) or best-of-many.
Our work addresses two key challenges in trajectory prediction: learning diverse outputs, and producing better predictions by imposing constraints using driving knowledge.
arXiv Detail & Related papers (2021-04-16T17:58:56Z) - Injecting Knowledge in Data-driven Vehicle Trajectory Predictors [82.91398970736391]
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
arXiv Detail & Related papers (2021-03-08T16:03:09Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.