An approach to implement Reinforcement Learning for Heterogeneous
Vehicular Networks
- URL: http://arxiv.org/abs/2208.12466v1
- Date: Fri, 26 Aug 2022 07:15:14 GMT
- Title: An approach to implement Reinforcement Learning for Heterogeneous
Vehicular Networks
- Authors: Bhavya Peshavaria, Sagar Kavaiya, Dhaval K. Patel
- Abstract summary: Here, multiple vehicle-to-vehicle (V2V) links reuse the spectrum of vehicle-to-infrastructure (V2I) links and also that of other networks.
ML-based methods are adopted so that channel selection can be carried out in a distributed manner on every vehicle.
- Score: 5.349238386983279
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper extends the idea of spectrum sharing in vehicular
networks to the Heterogeneous Vehicular Network (HetVNET) based on multi-agent
reinforcement learning (MARL). Here, multiple vehicle-to-vehicle (V2V) links
reuse the spectrum of vehicle-to-infrastructure (V2I) links and also that of
other networks. The fast-changing environment in vehicular networks makes it
impractical to collect channel state information (CSI) centrally and allocate
channels from a single node, so ML-based methods are used to carry out channel
selection in a distributed manner on every vehicle. Each On-Board Unit (OBU)
senses the signals on the channels and, based on that information, runs RL to
autonomously decide which channel to take up. Each V2V link acts as an agent
in the MARL formulation, and the model is trained so that these agents
collaborate rather than compete.
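The abstract above casts each V2V link as an independent agent whose On-Board Unit senses the channels and picks one via RL, with training arranged so that agents collaborate rather than compete. Below is a minimal, hypothetical sketch of that setup using independent Q-learning agents with a shared cooperative reward; the state discretisation, reward shape, and all constants are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: each V2V link is an independent Q-learning agent that
# picks one of several channels from locally sensed congestion. The shared
# reward (channels occupied by exactly one link) is an assumption intended to
# encourage collaboration rather than competition, as the abstract describes.
import numpy as np

N_AGENTS, N_CHANNELS, EPISODES = 4, 3, 2000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

# One Q-table per V2V agent: rows = discretised congestion on the agent's own
# channel, columns = channel choice. A real OBU would use richer CSI features.
q_tables = [np.zeros((N_CHANNELS, N_CHANNELS)) for _ in range(N_AGENTS)]

def sense(choices):
    """Discretised observation: how many links occupy each channel."""
    counts = np.bincount(choices, minlength=N_CHANNELS)
    return np.clip(counts, 0, N_CHANNELS - 1)

choices = rng.integers(0, N_CHANNELS, size=N_AGENTS)
for _ in range(EPISODES):
    obs = sense(choices)
    new_choices = np.empty(N_AGENTS, dtype=int)
    for i, q in enumerate(q_tables):
        s = obs[choices[i]]                      # congestion on own channel
        if rng.random() < EPS:                   # epsilon-greedy exploration
            new_choices[i] = rng.integers(N_CHANNELS)
        else:
            new_choices[i] = int(np.argmax(q[s]))
    # Shared (cooperative) reward: number of channels used by exactly one link.
    counts = np.bincount(new_choices, minlength=N_CHANNELS)
    reward = float(np.sum(counts == 1))
    next_obs = sense(new_choices)
    for i, q in enumerate(q_tables):
        s, a = obs[choices[i]], new_choices[i]
        s_next = next_obs[a]
        q[s, a] += ALPHA * (reward + GAMMA * q[s_next].max() - q[s, a])
    choices = new_choices

print("final channel assignment:", choices)
```

In a fuller MARL treatment the tabular agents would be replaced by deep Q-networks fed with richer sensed CSI, but the shared-reward, train-to-collaborate structure is the same idea the abstract outlines.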
Related papers
- Spectrum Sharing using Deep Reinforcement Learning in Vehicular Networks [0.14999444543328289]
The paper presents a few results and analyses, demonstrating the efficacy of the DQN model in enhancing spectrum sharing efficiency.
Both SARL and MARL models have exhibited high success rates of V2V communication, with the cumulative reward of the RL model reaching its maximum as training progresses.
arXiv Detail & Related papers (2024-10-16T12:59:59Z) - Joint Channel Selection using FedDRL in V2X [20.96900576250422]
Vehicle-to-everything (V2X) communication technology is revolutionizing transportation by enabling interactions between vehicles, devices, and infrastructures.
In this paper, we study the problem of joint channel selection, where vehicles with different technologies choose one or more Access Points (APs) to transmit messages in a network.
We propose an approach based on Federated Deep Reinforcement Learning (FedDRL), which enables each vehicle to benefit from other vehicles' experiences (see the sketch after this list).
arXiv Detail & Related papers (2024-10-03T14:04:08Z) - A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, which leads to smoother traffic flow, reduced idling time, and lower CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z) - VREM-FL: Mobility-Aware Computation-Scheduling Co-Design for Vehicular Federated Learning [2.6322811557798746]
Vehicular radio environment map federated learning (VREM-FL) is proposed.
It combines mobility of vehicles with 5G radio environment maps.
VREM-FL can be tuned to trade training time for radio resource usage.
arXiv Detail & Related papers (2023-11-30T17:38:54Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicle (FLCAV) has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Transfer Learning in Multi-Agent Reinforcement Learning with Double
Q-Networks for Distributed Resource Sharing in V2X Communication [24.442174952832108]
This paper addresses the problem of decentralized spectrum sharing in vehicle-to-everything (V2X) communication networks.
The aim is to provide resource-efficient coexistence of vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) links.
arXiv Detail & Related papers (2021-07-13T15:50:10Z) - A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectories information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving a satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z) - Distributed Conditional Generative Adversarial Networks (GANs) for
Data-Driven Millimeter Wave Communications in UAV Networks [116.94802388688653]
A novel framework is proposed to perform data-driven air-to-ground (A2G) channel estimation for millimeter wave (mmWave) communications in an unmanned aerial vehicle (UAV) wireless network.
An effective channel estimation approach is developed, allowing each UAV to train a stand-alone channel model via a conditional generative adversarial network (CGAN) along each beamforming direction.
A cooperative framework, based on a distributed CGAN architecture, is developed, allowing each UAV to collaboratively learn the mmWave channel distribution.
arXiv Detail & Related papers (2021-02-02T20:56:46Z) - Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z) - Multi-Agent Reinforcement Learning for Channel Assignment and Power
Allocation in Platoon-Based C-V2X Systems [15.511438222357489]
We consider the problem of joint channel assignment and power allocation in underlaid cellular vehicular-to-everything (C-V2X) systems.
Our proposed distributed resource allocation algorithm provides performance close to that of the well-known exhaustive search algorithm.
arXiv Detail & Related papers (2020-11-09T16:55:09Z)
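Several entries above (the FedDRL joint channel selection work and the federated RL cooperative perception work) rely on aggregating models trained locally by individual vehicles. The following is a minimal, hypothetical sketch of that aggregation step in a FedAvg style; the network shape, weighting scheme, and all names are illustrative assumptions, not taken from those papers.

```python
# Hypothetical FedAvg-style aggregation of per-vehicle Q-network weights,
# illustrating the federated (deep) RL idea: vehicles train locally on their
# own experiences, then a server or RSU averages the parameters so every
# vehicle benefits from the others' experience. Shapes, weights, and the
# local-update stand-in are illustrative assumptions.
import numpy as np

def init_qnet(obs_dim=8, n_actions=4, hidden=16, seed=0):
    """A toy two-layer Q-network represented as a dict of numpy arrays."""
    rng = np.random.default_rng(seed)
    return {
        "w1": rng.normal(scale=0.1, size=(obs_dim, hidden)),
        "b1": np.zeros(hidden),
        "w2": rng.normal(scale=0.1, size=(hidden, n_actions)),
        "b2": np.zeros(n_actions),
    }

def local_update(params, lr=1e-3, seed=1):
    """Stand-in for one vehicle's local DRL training (random step here)."""
    rng = np.random.default_rng(seed)
    return {k: v - lr * rng.normal(size=v.shape) for k, v in params.items()}

def fed_avg(local_models, weights):
    """Weighted parameter average, e.g. by each vehicle's sample count."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return {
        k: sum(w * m[k] for w, m in zip(weights, local_models))
        for k in local_models[0]
    }

global_model = init_qnet()
for round_idx in range(3):                       # communication rounds
    local_models = [local_update(global_model, seed=round_idx * 10 + i)
                    for i in range(5)]           # 5 vehicles train locally
    global_model = fed_avg(local_models, weights=[100, 80, 120, 90, 110])
    print("round", round_idx, "w1 mean:", global_model["w1"].mean())
```

In the actual systems the local update would be full DQN or policy-gradient training on on-board data, and only model parameters (never raw driving data) would be exchanged, which is what gives the federated approaches their privacy and communication benefits.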
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.