Emergent Communication Protocol Learning for Task Offloading in
Industrial Internet of Things
- URL: http://arxiv.org/abs/2401.12914v1
- Date: Tue, 23 Jan 2024 17:06:13 GMT
- Title: Emergent Communication Protocol Learning for Task Offloading in
Industrial Internet of Things
- Authors: Salwa Mostafa, Mateus P. Mota, Alvaro Valcarce, and Mehdi Bennis
- Abstract summary: We learn a computation offloading decision and multichannel access policy with corresponding signaling.
Specifically, the base station and industrial Internet of Things mobile devices are reinforcement learning agents.
We adopt an emergent communication protocol learning framework to solve this problem.
- Score: 30.146175299047325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we leverage a multi-agent reinforcement learning (MARL)
framework to jointly learn a computation offloading decision and multichannel
access policy with corresponding signaling. Specifically, the base station and
industrial Internet of Things mobile devices are reinforcement learning agents
that need to cooperate to execute their computation tasks within a deadline
constraint. We adopt an emergent communication protocol learning framework to
solve this problem. The numerical results illustrate the effectiveness of
emergent communication in improving the channel access success rate and the
number of successfully computed tasks compared to contention-based,
contention-free, and no-communication approaches. Moreover, the proposed task
offloading policy outperforms remote and local computation baselines.
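The abstract's core idea, agents that jointly learn what to do (offload or not) and what to signal, can be illustrated with a toy sketch. This is a hypothetical, bandit-style simplification and not the authors' actual architecture: a base-station learner maps a toy load state to a discrete broadcast message, and each device learner maps the received message to an offload/local decision, with a shared reward for collision-free channel access.

```python
import random
from collections import defaultdict

random.seed(0)

N_DEVICES = 2      # IIoT devices (independent learners)
N_MSGS = 2         # size of the learned signaling vocabulary (assumption)
ACTIONS = [0, 1]   # 0 = compute locally, 1 = offload over the shared channel
ALPHA, EPS = 0.1, 0.1  # one-step (bandit-style) Q updates

# Q-tables: the BS maps a toy "load" state to a broadcast message;
# each device maps the received message to an offloading action.
q_bs = defaultdict(lambda: [0.0] * N_MSGS)
q_dev = [defaultdict(lambda: [0.0] * len(ACTIONS)) for _ in range(N_DEVICES)]

def eps_greedy(qvals, eps):
    if random.random() < eps:
        return random.randrange(len(qvals))
    return max(range(len(qvals)), key=lambda a: qvals[a])

def step(train=True):
    eps = EPS if train else 0.0
    state = random.randrange(2)            # toy BS-side state (e.g. channel load)
    msg = eps_greedy(q_bs[state], eps)     # learned signal, no fixed semantics
    acts = [eps_greedy(q_dev[i][msg], eps) for i in range(N_DEVICES)]
    # Shared reward: success iff exactly one device offloads
    # (no collision on the channel, no wasted slot).
    reward = 1.0 if sum(acts) == 1 else -1.0
    if train:
        q_bs[state][msg] += ALPHA * (reward - q_bs[state][msg])
        for i, a in enumerate(acts):
            q_dev[i][msg][a] += ALPHA * (reward - q_dev[i][msg][a])
    return reward

for _ in range(5000):
    step()

wins = sum(step(train=False) > 0 for _ in range(1000))
print(f"post-training channel-access success rate: {wins / 1000:.2f}")
```

Because the reward is shared, the devices tend to break symmetry into complementary roles per message, which is the minimal form of an emergent convention; the paper's full setting additionally handles multichannel access, task deadlines, and learned uplink signaling.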
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]

A novel framework for decentralized computing and communication resource allocation in multiuser SC systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hypergame theory.
Simulation results show that the proposed Stackelberg hypergame results in efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z)
- Emergency Computing: An Adaptive Collaborative Inference Method Based on Hierarchical Reinforcement Learning [14.929735103723573]
We propose an Emergency Network with Sensing, Communication, Computation, Caching, and Intelligence (E-SC3I)
The framework incorporates mechanisms for emergency computing, caching, integrated communication and sensing, and intelligence empowerment.
We specifically concentrate on emergency computing and propose an adaptive collaborative inference method (ACIM) based on hierarchical reinforcement learning.
arXiv Detail & Related papers (2024-02-03T13:28:35Z)
- Will 6G be Semantic Communications? Opportunities and Challenges from Task Oriented and Secure Communications to Integrated Sensing [49.83882366499547]
This paper explores opportunities and challenges of task (goal)-oriented and semantic communications for next-generation (NextG) networks through the integration of multi-task learning.
We employ deep neural networks representing a dedicated encoder at the transmitter and multiple task-specific decoders at the receiver.
We scrutinize potential vulnerabilities stemming from adversarial attacks during both training and testing phases.
arXiv Detail & Related papers (2024-01-03T04:01:20Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Federated Reinforcement Learning at the Edge [1.4271989597349055]
Modern cyber-physical architectures use data collected from systems at different physical locations to learn appropriate behaviors and adapt to uncertain environments.
This paper considers a setup where multiple agents need to communicate efficiently in order to jointly solve a reinforcement learning problem over time-series data collected in a distributed manner.
An algorithm for achieving communication efficiency is proposed, supported with theoretical guarantees, practical implementations, and numerical evaluations.
arXiv Detail & Related papers (2021-12-11T03:28:59Z)
- Event-Based Communication in Multi-Agent Distributed Q-Learning [0.0]
We present an approach to reduce the communication of information needed on a multi-agent learning system inspired by Event Triggered Control (ETC) techniques.
We consider a baseline scenario of a distributed Q-learning problem on a Markov Decision Process (MDP)
Following an event-based approach, N agents explore the MDP and communicate experiences to a central learner only when necessary; the central learner then performs the updates of the actors' Q-functions.
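The event-based idea in this entry, communicate an experience only when it matters, can be sketched as follows. This is a hypothetical single-agent toy (a small chain MDP), not the paper's multi-agent setup: the actor forwards an experience to the central learner only when its local TD error exceeds a threshold, in the spirit of Event Triggered Control.

```python
import random

random.seed(1)

N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, EPS, THRESHOLD = 0.2, 0.9, 0.2, 0.05  # assumed toy hyperparameters

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # central learner's Q-table
sent = total = 0

def env_step(s, a):
    # Toy chain MDP: action 1 moves right, action 0 resets to the start;
    # reward is collected at the rightmost state.
    s2 = min(s + 1, N_STATES - 1) if a == 1 else 0
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

s = 0
for _ in range(3000):
    if random.random() < EPS:
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
    s2, r = env_step(s, a)
    total += 1
    # Event trigger: send the experience to the central learner
    # only when the local TD error is large enough to matter.
    td = r + GAMMA * max(Q[s2]) - Q[s][a]
    if abs(td) > THRESHOLD:
        sent += 1
        Q[s][a] += ALPHA * td  # central learner performs the update
    s = s2

print(f"communicated {sent}/{total} experiences")
```

Experiences with near-zero TD error carry little new information, so suppressing them cuts communication while the learner still converges to preferring the rewarding direction; the threshold trades off communication volume against residual learning error.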
arXiv Detail & Related papers (2021-09-03T10:06:53Z)
- Communication-Efficient Split Learning Based on Analog Communication and Over the Air Aggregation [48.150466900765316]
Split-learning (SL) has recently gained popularity due to its inherent privacy-preserving capabilities and ability to enable collaborative inference for devices with limited computational power.
Standard SL algorithms assume an ideal underlying digital communication system and ignore the problem of scarce communication bandwidth.
We propose a novel SL framework to solve the remote inference problem that introduces an additional layer at the agent side and constrains the choices of the weights and the biases to ensure over-the-air aggregation.
arXiv Detail & Related papers (2021-06-02T07:49:41Z)
- Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport [9.891241465396098]
We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
arXiv Detail & Related papers (2021-03-29T01:16:12Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for fifth-generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Learning to Communicate Using Counterfactual Reasoning [2.8110705488739676]
This paper introduces the novel multi-agent counterfactual communication learning (MACC) method.
MACC adapts counterfactual reasoning in order to overcome the credit assignment problem for communicating agents.
Our experiments show that MACC is able to outperform the state-of-the-art baselines in four different scenarios in the Particle environment.
arXiv Detail & Related papers (2020-06-12T14:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.