An Agile Adaptation Method for Multi-mode Vehicle Communication Networks
- URL: http://arxiv.org/abs/2408.01429v1
- Date: Thu, 18 Jul 2024 13:04:34 GMT
- Title: An Agile Adaptation Method for Multi-mode Vehicle Communication Networks
- Authors: Shiwen He, Kanghong Chen, Shiyue Huang, Wei Huang, Zhenyu An,
- Abstract summary: Decision process and reinforcement learning are applied to establish an agile adaptation mechanism.
Q-learning is used to train the agile adaptation reinforcement learning model and output the trained model.
- Score: 9.632025797373158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on discovering the impact of communication mode allocation on communication efficiency in vehicle communication networks. Specifically, a Markov decision process and reinforcement learning are applied to establish an agile adaptation mechanism for multi-mode communication devices according to the driving scenarios and business requirements. Then, Q-learning is used to train the agile adaptation reinforcement learning model and output the trained model. The model learns the best actions to take in different states to maximize the cumulative reward, avoiding the poor adaptation caused by inaccurate delay measurement in unstable communication scenarios. The experiments show that the proposed scheme can quickly adapt to a dynamic vehicle networking environment while achieving high concurrency and communication efficiency.
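The abstract's MDP + Q-learning mechanism can be illustrated with a minimal tabular sketch. The states (driving scenarios), actions (communication modes), reward function, and hyperparameters below are illustrative placeholders, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Hypothetical sketch of tabular Q-learning for selecting a communication
# mode per driving scenario. States, actions, and rewards are illustrative
# assumptions, not the formulation used in the paper.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
MODES = ["DSRC", "C-V2X", "WiFi"]           # candidate communication modes
SCENARIOS = ["urban", "highway", "tunnel"]  # simplified driving scenarios

Q = defaultdict(float)  # Q[(state, action)] -> estimated cumulative reward

def choose_mode(state):
    """Epsilon-greedy action selection over communication modes."""
    if random.random() < EPSILON:
        return random.choice(MODES)
    return max(MODES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update toward the bootstrapped target."""
    best_next = max(Q[(next_state, a)] for a in MODES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy training loop with a made-up reward that favors C-V2X on highways.
random.seed(0)
for _ in range(2000):
    s = random.choice(SCENARIOS)
    a = choose_mode(s)
    r = 1.0 if (s == "highway" and a == "C-V2X") else 0.0
    update(s, a, r, random.choice(SCENARIOS))

# Greedy mode the table has learned for the "highway" scenario.
best_highway_mode = max(MODES, key=lambda a: Q[("highway", a)])
```

Because the update bootstraps from the next state's best value rather than a measured return, the learned policy is less sensitive to noisy per-step measurements, which is the intuition behind the paper's claim about unstable communication scenarios.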
Related papers
- Semantic Communication-Enhanced Split Federated Learning for Vehicular Networks: Architecture, Challenges, and Case Study [50.345531105285524]
Vehicular edge intelligence (VEI) is vital for future intelligent transportation systems. Traditional centralized learning in dynamic vehicular networks faces significant communication overhead and privacy risks. This paper presents a semantic communication-enhanced split federated learning (SC-USFL) framework.
arXiv Detail & Related papers (2026-03-05T08:36:49Z) - Multi-Agent Reinforcement Learning with Communication-Constrained Priors [22.124940712335434]
Communication is an effective means of improving the learning of cooperative policies in multi-agent systems. Existing multi-agent reinforcement learning with communication struggles to apply to complex and dynamic real-world environments. We introduce a communication-constrained multi-agent reinforcement learning framework, quantifying the impact of communication messages into the global reward.
arXiv Detail & Related papers (2025-12-03T07:35:07Z) - Efficient Onboard Vision-Language Inference in UAV-Enabled Low-Altitude Economy Networks via LLM-Enhanced Optimization [61.55616421408666]
Low-Altitude Economy Networks (LAENets) have enabled a variety of applications, including aerial surveillance, environmental sensing, and semantic data collection. Onboard vision-language models (VLMs) offer real-time inference but are constrained by limited onboard resources and dynamic network conditions. We propose a UAV-enabled LAENet system that improves communication efficiency under dynamic LAENet conditions.
arXiv Detail & Related papers (2025-10-11T05:11:21Z) - Automated Vehicles Should be Connected with Natural Language [10.579888130257185]
Multi-agent collaborative driving promises improvements in traffic safety and efficiency through collective perception and decision making. Existing communication media suffer limitations in bandwidth efficiency, information completeness, and agent interoperability. We argue that addressing these challenges requires a transition from purely perception-oriented data exchanges to explicit intent and reasoning communication using natural language.
arXiv Detail & Related papers (2025-06-29T16:41:19Z) - Multi-Modal Self-Supervised Semantic Communication [52.76990720898666]
We propose a multi-modal semantic communication system that leverages multi-modal self-supervised learning to enhance task-agnostic feature extraction.
The proposed approach effectively captures both modality-invariant and modality-specific features while minimizing training-related communication overhead.
The findings underscore the advantages of multi-modal self-supervised learning in semantic communication, paving the way for more efficient and scalable edge inference systems.
arXiv Detail & Related papers (2025-03-18T06:13:02Z) - DRL-Based Optimization for AoI and Energy Consumption in C-V2X Enabled IoV [33.32647734550201]
This paper analyzes the effects of multi-priority queues and NOMA on Age of Information in the C-V2X vehicular communication system.
The proposed approach demonstrates advantages in terms of energy consumption and AoI.
arXiv Detail & Related papers (2024-11-20T07:59:35Z) - Spectrum Sharing using Deep Reinforcement Learning in Vehicular Networks [0.14999444543328289]
The paper presents a few results and analyses, demonstrating the efficacy of the DQN model in enhancing spectrum sharing efficiency.
Both SARL and MARL models have exhibited high success rates of V2V communication, with the cumulative reward of the RL model reaching its maximum as training progresses.
arXiv Detail & Related papers (2024-10-16T12:59:59Z) - Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z) - Real-Time Network-Level Traffic Signal Control: An Explicit Multiagent Coordination Method [9.761657423863706]
Efficient traffic signal control (TSC) has been one of the most useful ways for reducing urban road congestion.
Recent efforts that applied reinforcement learning (RL) methods can query policies by mapping the traffic state to the signal decision in real-time.
We propose an explicit multiagent coordination (EMC)-based online planning method that can satisfy adaptive, real-time, and network-level TSC.
arXiv Detail & Related papers (2023-06-15T04:08:09Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - Federated Reinforcement Learning at the Edge [1.4271989597349055]
Modern cyber-physical architectures use data collected from systems at different physical locations to learn appropriate behaviors and adapt to uncertain environments.
This paper considers a setup where multiple agents need to communicate efficiently in order to jointly solve a reinforcement learning problem over time-series data collected in a distributed manner.
An algorithm for achieving communication efficiency is proposed, supported with theoretical guarantees, practical implementations, and numerical evaluations.
arXiv Detail & Related papers (2021-12-11T03:28:59Z) - Offline Contextual Bandits for Wireless Network Optimization [107.24086150482843]
In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to the changes in the user demand.
Our solution combines existent methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
arXiv Detail & Related papers (2021-11-11T11:31:20Z) - Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport [9.891241465396098]
We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
arXiv Detail & Related papers (2021-03-29T01:16:12Z) - Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z) - Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.