Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning
- URL: http://arxiv.org/abs/2012.03414v1
- Date: Mon, 7 Dec 2020 02:09:15 GMT
- Title: Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning
- Authors: Mohamed K. Abdel-Aziz, Cristina Perfecto, Sumudu Samarakoon, Mehdi
Bennis, Walid Saad
- Abstract summary: A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
- Score: 101.64598586454571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative perception plays a vital role in extending a vehicle's sensing
range beyond its line-of-sight. However, exchanging raw sensory data under
limited communication resources is infeasible. Towards enabling an efficient
cooperative perception, vehicles need to address the following fundamental
questions: what sensory data needs to be shared, at which resolution, and with
which vehicles? To answer these questions, in this paper, a novel framework is
proposed to allow reinforcement learning (RL)-based vehicular association,
resource block (RB) allocation, and content selection of cooperative perception
messages (CPMs) by utilizing a quadtree-based point cloud compression
mechanism. Furthermore, a federated RL approach is introduced in order to speed
up the training process across vehicles. Simulation results show the ability of
the RL agents to efficiently learn the vehicles' association, RB allocation,
and message content selection while maximizing vehicles' satisfaction in terms
of the received sensory information. The results also show that federated RL
improves the training process, where better policies can be achieved within the
same amount of time compared to the non-federated approach.
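To make the moving parts concrete, here is a minimal, hypothetical sketch (not the authors' code) of how the two headline ingredients could fit together: an action-branching Q-network whose branches cover vehicular association, RB allocation, and quadtree-block content selection, plus a FedAvg-style parameter average across vehicles. All dimensions, the greedy readout, and the plain unweighted averaging are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class BranchingQNetwork(nn.Module):
    """Shared state encoder with one Q-value 'branch' per sub-action, so the
    joint action space grows additively rather than combinatorially."""
    def __init__(self, state_dim, n_assoc, n_rb, n_blocks, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.assoc_head = nn.Linear(hidden, n_assoc)     # which vehicle to pair with
        self.rb_head = nn.Linear(hidden, n_rb)           # which resource block to use
        self.content_head = nn.Linear(hidden, n_blocks)  # which quadtree block / resolution to send

    def forward(self, state):
        z = self.encoder(state)
        return self.assoc_head(z), self.rb_head(z), self.content_head(z)

def greedy_action(net, state):
    """One sub-action per branch; training would wrap this in epsilon-greedy."""
    with torch.no_grad():
        q_assoc, q_rb, q_content = net(state)
    return q_assoc.argmax(-1), q_rb.argmax(-1), q_content.argmax(-1)

def federated_average(models):
    """FedAvg-style round: average per-vehicle model parameters and push the
    global average back to every local model."""
    keys = models[0].state_dict().keys()
    avg = {k: torch.stack([m.state_dict()[k] for m in models]).mean(dim=0) for k in keys}
    for m in models:
        m.load_state_dict(avg)

# Usage with made-up sizes: 3 vehicles, 32-dim local observation,
# 4 candidate receivers, 8 RBs, 16 quadtree blocks.
fleet = [BranchingQNetwork(32, 4, 8, 16) for _ in range(3)]
obs = torch.randn(1, 32)
print(greedy_action(fleet[0], obs))
federated_average(fleet)
```

Action branching keeps the output size at n_assoc + n_rb + n_blocks rather than their product, which is what makes a joint discrete decision of this size tractable for a single per-vehicle Q-network.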
Related papers
- SPformer: A Transformer Based DRL Decision Making Method for Connected Automated Vehicles [9.840325772591024]
We propose a CAV decision-making architecture based on transformers and reinforcement learning.
A learnable policy token is used as the learning medium of the multi-vehicle joint policy (a hedged sketch of this idea appears after this list).
The model can make good use of the state information of all vehicles in the traffic scenario.
arXiv Detail & Related papers (2024-09-23T15:16:35Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Causality-Driven Reinforcement Learning for Joint Communication and Sensing [4.165335263540595]
We propose a causally-aware RL agent which can intervene and discover causal relationships for mMIMO-based JCAS environments.
We use a state-dependent action dimension selection strategy to realize causal discovery for RL-based JCAS.
arXiv Detail & Related papers (2024-09-07T07:15:57Z) - Communication-Aware Reinforcement Learning for Cooperative Adaptive Cruise Control [15.31488551912888]
Reinforcement Learning (RL) has proven effective in optimizing complex decision-making processes in CACC.
Multi-agent RL (MARL) has shown remarkable potential by enabling coordinated actions among multiple CAVs.
We propose Communication-Aware Reinforcement Learning (CA-RL) to address these challenges.
arXiv Detail & Related papers (2024-07-12T03:28:24Z) - Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning
Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent under conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z) - Decentralized cooperative perception for autonomous vehicles: Learning
to value the unknown [1.2246649738388387]
We propose a decentralized, i.e. peer-to-peer, collaboration in which the agents actively pursue full perception.
We propose a way to learn a communication policy that reverses the usual communication paradigm by only requesting from other vehicles what is unknown to the ego-vehicle.
In particular, we propose Locally Predictable VAE (LP-VAE), which appears to produce better belief states for prediction than state-of-the-art models.
arXiv Detail & Related papers (2022-12-12T00:01:27Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
A federated learning-empowered connected autonomous vehicle (FLCAV) framework has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL [90.06845886194235]
We propose a modified objective for model-based reinforcement learning (RL).
We integrate a term inspired by variational empowerment into a state-space model based on mutual information.
We evaluate the approach on a suite of vision-based robot control tasks with natural video backgrounds.
arXiv Detail & Related papers (2022-04-18T23:09:23Z) - Attacking Deep Reinforcement Learning-Based Traffic Signal Control
Systems with Colluding Vehicles [4.2455052426413085]
This paper formulates a novel task in which a group of vehicles can cooperatively send falsified information to "cheat" DRL-based ATCS.
CollusionVeh is a generic and effective vehicle-colluding framework composed of a road situation encoder, a vehicle interpreter, and a communication mechanism.
The research outcomes could help improve the reliability and robustness of the ATCS and better protect the smart mobility systems.
arXiv Detail & Related papers (2021-11-04T13:10:33Z) - Reinforcement Learning Based Vehicle-cell Association Algorithm for
Highly Mobile Millimeter Wave Communication [53.47785498477648]
This paper investigates the problem of vehicle-cell association in millimeter wave (mmWave) communication networks.
We first formulate the vehicular user (VU) cell association problem as a discrete non-convex optimization problem.
The proposed solution achieves up to 15% gains in terms of sum rate and a 20% reduction in VUE outages compared to several baseline designs.
arXiv Detail & Related papers (2020-01-22T08:51:05Z)
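For the SPformer entry above, the following is a hedged sketch of what a "learnable policy token" could look like in code: a trainable token is prepended to per-vehicle state embeddings, a transformer encoder mixes them, and the token's output slot is read out as joint-policy logits. The layer sizes, action count, and readout are assumptions for illustration, not the SPformer architecture.

```python
import torch
import torch.nn as nn

class PolicyTokenEncoder(nn.Module):
    def __init__(self, state_dim, d_model=64, n_actions=5, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        self.policy_token = nn.Parameter(torch.zeros(1, 1, d_model))  # learnable token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, vehicle_states):               # (batch, n_vehicles, state_dim)
        tokens = self.embed(vehicle_states)
        tok = self.policy_token.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([tok, tokens], dim=1))
        return self.head(out[:, 0])                  # read the policy token's slot

# Usage with made-up sizes: 6 surrounding vehicles, 8-dim state each.
logits = PolicyTokenEncoder(state_dim=8)(torch.randn(2, 6, 8))
print(logits.shape)  # torch.Size([2, 5])
```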