Multi-Agent Reinforcement Learning for Channel Assignment and Power
Allocation in Platoon-Based C-V2X Systems
- URL: http://arxiv.org/abs/2011.04555v2
- Date: Sun, 19 Jun 2022 22:26:11 GMT
- Title: Multi-Agent Reinforcement Learning for Channel Assignment and Power
Allocation in Platoon-Based C-V2X Systems
- Authors: Hung V. Vu, Mohammad Farzanullah, Zheyu Liu, Duy H. N. Nguyen, Robert
Morawski and Tho Le-Ngoc
- Abstract summary: We consider the problem of joint channel assignment and power allocation in underlaid cellular vehicular-to-everything (C-V2X) systems.
Our proposed distributed resource allocation algorithm achieves performance close to that of the well-known exhaustive search algorithm.
- Score: 15.511438222357489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of joint channel assignment and power allocation in
underlaid cellular vehicular-to-everything (C-V2X) systems where multiple
vehicle-to-network (V2N) uplinks share the time-frequency resources with
multiple vehicle-to-vehicle (V2V) platoons that enable groups of connected and
autonomous vehicles to travel closely together. Due to the high user mobility
in vehicular environments, a traditional centralized optimization approach
relying on global channel information might not be viable in C-V2X systems
with a large number of users. Utilizing a multi-agent reinforcement
learning (RL) approach, we propose a distributed resource allocation (RA)
algorithm to overcome this challenge. Specifically, we model the RA problem as
a multi-agent system. Based solely on local channel information, the platoon
leaders, each acting as an agent, interact with one another and select the
optimal combination of sub-band and power level for transmitting their
signals. To this end, we utilize the double deep Q-learning
algorithm to jointly train the agents with the twin objectives of maximizing
the sum-rate of the V2N links and satisfying the packet delivery probability
requirement of each V2V link within a desired latency limit. Simulation
results show that our proposed RL-based algorithm achieves performance close
to that of the well-known exhaustive search algorithm.
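The per-agent decision described in the abstract, selecting a joint (sub-band, power-level) action greedily and training it with a double deep Q-learning target, can be sketched as follows. All names, dimensions, and the toy linear "network" are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sizes: these constants are illustrative assumptions,
# not values taken from the paper.
N_SUBBANDS = 4        # shared uplink sub-bands
N_POWER_LEVELS = 3    # discrete transmit power levels
N_ACTIONS = N_SUBBANDS * N_POWER_LEVELS  # joint (sub-band, power) action space
STATE_DIM = 8         # local channel-observation vector of a platoon leader
GAMMA = 0.95          # discount factor

rng = np.random.default_rng(0)

def decode_action(a):
    """Map a flat action index to a (sub-band, power-level) pair."""
    return divmod(a, N_POWER_LEVELS)

def q_values(weights, state):
    """Toy linear Q-function standing in for a deep network:
    one weight row per joint action."""
    return weights @ state

def double_q_target(online_w, target_w, reward, next_state):
    """Double deep Q-learning target: the online network picks the
    greedy next action, the target network evaluates it, which
    reduces the overestimation bias of vanilla Q-learning."""
    a_star = int(np.argmax(q_values(online_w, next_state)))
    return reward + GAMMA * q_values(target_w, next_state)[a_star]

# One agent's decision step on its local observation.
online_w = rng.normal(size=(N_ACTIONS, STATE_DIM))
target_w = rng.normal(size=(N_ACTIONS, STATE_DIM))
state = rng.normal(size=STATE_DIM)

action = int(np.argmax(q_values(online_w, state)))  # greedy local choice
subband, power = decode_action(action)
y = float(double_q_target(online_w, target_w, reward=1.0, next_state=state))
print(subband, power, round(y, 3))
```

In the multi-agent setting, each platoon leader would run this decision step independently on its own observation, while the shared reward (V2N sum-rate plus V2V delivery satisfaction) couples the agents during training.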
Related papers
- Semantic-Aware Resource Management for C-V2X Platooning via Multi-Agent Reinforcement Learning [28.375064269304975]
This paper presents a semantic-aware multi-modal resource allocation (SAMRA) scheme for multi-task scenarios using multi-agent reinforcement learning (MARL).
The proposed approach leverages the semantic information to optimize the allocation of communication resources.
It achieves significant gains in quality of experience (QoE) and communication efficiency in C-V2X platooning scenarios.
arXiv Detail & Related papers (2024-11-07T12:55:35Z) - Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for RIS-Aided IoV Networks [43.443526528832145]
We propose an RIS-assisted Internet of Vehicles (IoV) network based on the vehicle-to-everything (V2X) communication method.
In order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model.
arXiv Detail & Related papers (2024-06-17T06:16:07Z) - DRL-Based RAT Selection in a Hybrid Vehicular Communication Network [2.345902601618188]
Cooperative intelligent transport systems rely on a set of Vehicle-to-Everything (V2X) applications to enhance road safety.
New V2X applications depend on a significant amount of shared data and require high reliability, low end-to-end (E2E) latency, and high throughput.
We propose an intelligent, scalable hybrid vehicular communication architecture that leverages the performance of multiple Radio Access Technologies (RATs) to meet the needs of these applications.
arXiv Detail & Related papers (2024-04-03T08:13:07Z) - Federated Reinforcement Learning for Resource Allocation in V2X Networks [46.6256432514037]
Resource allocation significantly impacts the performance of vehicle-to-everything (V2X) networks.
Most existing algorithms for resource allocation are based on optimization or machine learning.
In this paper, we explore resource allocation in a V2X network under the framework of federated reinforcement learning.
arXiv Detail & Related papers (2023-10-15T15:26:54Z) - Multi-Resource Allocation for On-Device Distributed Federated Learning
Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z) - An approach to implement Reinforcement Learning for Heterogeneous
Vehicular Networks [5.349238386983279]
Here, multiple vehicle-to-vehicle (V2V) links reuse the spectrum of vehicle-to-infrastructure (V2I) links as well as those of other networks.
ML-based methods are used so that they can be implemented in a distributed manner across all vehicles.
arXiv Detail & Related papers (2022-08-26T07:15:14Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Transfer Learning in Multi-Agent Reinforcement Learning with Double
Q-Networks for Distributed Resource Sharing in V2X Communication [24.442174952832108]
This paper addresses the problem of decentralized spectrum sharing in vehicle-to-everything (V2X) communication networks.
The aim is to provide resource-efficient coexistence of vehicle-to-infrastructure(V2I) and vehicle-to-vehicle(V2V) links.
arXiv Detail & Related papers (2021-07-13T15:50:10Z) - Deep Learning-based Resource Allocation For Device-to-Device
Communication [66.74874646973593]
We propose a framework for the optimization of the resource allocation in multi-channel cellular systems with device-to-device (D2D) communication.
A deep learning (DL) framework is proposed, where the optimal resource allocation strategy for arbitrary channel conditions is approximated by deep neural network (DNN) models.
Our simulation results confirm that near-optimal performance can be attained with low computation time, which underlines the real-time capability of the proposed scheme.
arXiv Detail & Related papers (2020-11-25T14:19:23Z) - Distributed Reinforcement Learning for Cooperative Multi-Robot Object
Manipulation [53.262360083572005]
We consider solving a cooperative multi-robot object manipulation task using reinforcement learning (RL).
We propose two distributed multi-agent RL approaches: distributed approximate RL (DA-RL) and game-theoretic RL (GT-RL).
Although we focus on a small system of two agents in this paper, both DA-RL and GT-RL apply to general multi-agent systems, and are expected to scale well to large systems.
arXiv Detail & Related papers (2020-03-21T00:43:54Z) - Reinforcement Learning Based Vehicle-cell Association Algorithm for
Highly Mobile Millimeter Wave Communication [53.47785498477648]
This paper investigates the problem of vehicle-cell association in millimeter wave (mmWave) communication networks.
We first formulate the vehicle-user (VU) association problem as a discrete non-convex optimization problem.
The proposed solution achieves up to 15% gains in user sum rate and a 20% reduction in VUE outage compared to several baseline designs.
arXiv Detail & Related papers (2020-01-22T08:51:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.