Multi-Agent Reinforcement Learning for Pragmatic Communication and
Control
- URL: http://arxiv.org/abs/2302.14399v1
- Date: Tue, 28 Feb 2023 08:30:24 GMT
- Title: Multi-Agent Reinforcement Learning for Pragmatic Communication and
Control
- Authors: Federico Mason and Federico Chiariotti and Andrea Zanella and Petar
Popovski
- Abstract summary: We propose a joint design that combines goal-oriented communication and networked control into a single optimization model.
Joint training of the communication and control systems can significantly improve the overall performance.
- Score: 40.11766545693947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automation of factories and manufacturing processes has been accelerating
over the past few years, boosted by the Industry 4.0 paradigm, including
diverse scenarios with mobile, flexible agents. Efficient coordination between
mobile robots requires reliable wireless transmission in highly dynamic
environments, often with strict timing requirements. Goal-oriented
communication is a possible solution for this problem: communication decisions
should be optimized for the target control task, providing the information that
is most relevant to decide which action to take. From the control perspective,
networked control design takes the communication impairments into account in
its optimization of physical actions. In this work, we propose a joint design
that combines goal-oriented communication and networked control into a single
optimization model, an extension of a multi-agent POMDP which we call
Cyber-Physical POMDP (CP-POMDP). The model is flexible enough to represent
several swarm and cooperative scenarios, and we illustrate its potential with
two simple reference scenarios with a single agent and a set of supporting
sensors. Joint training of the communication and control systems can
significantly improve the overall performance, particularly if communication is
severely constrained, and can even lead to implicit coordination of
communication actions.
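The abstract's core idea, a single decision process in which some agents take communication actions over a constrained channel while a control agent takes physical actions, with one joint reward coupling both, can be illustrated with a toy environment. This is a minimal sketch under illustrative assumptions (a 1-D plant, Gaussian noise, a fixed per-packet cost); the class name, dynamics, and reward are not taken from the paper.

```python
import random

# Toy sketch of a Cyber-Physical POMDP (CP-POMDP) step: sensors take
# *communication* actions (transmit or stay silent over a capacity-limited
# channel) while a control agent takes a *physical* action based on what it
# receives. All names and dynamics are illustrative assumptions.

class ToyCPPOMDP:
    def __init__(self, n_sensors=2, channel_capacity=1):
        self.n_sensors = n_sensors
        self.channel_capacity = channel_capacity  # max packets per step
        self.state = 0.0    # hidden physical state (1-D toy plant)
        self.belief = 0.0   # controller's last received estimate

    def step(self, comm_actions, control_action):
        """comm_actions: list of 0/1 (silent/transmit), one per sensor.
        control_action: scalar force applied by the control agent."""
        # Communication phase: packets beyond channel capacity are dropped.
        transmitted = [i for i, a in enumerate(comm_actions) if a == 1]
        delivered = transmitted[: self.channel_capacity]
        if delivered:
            # Each delivered packet carries a noisy observation of the state.
            obs = [self.state + random.gauss(0, 0.1) for _ in delivered]
            self.belief = sum(obs) / len(obs)

        # Control phase: the plant drifts and the controller pushes back.
        self.state += random.gauss(0, 0.2) + control_action

        # A single joint reward couples both subproblems (tracking error plus
        # a communication cost), which is what makes joint training of the
        # communication and control policies meaningful.
        reward = -abs(self.state) - 0.05 * len(transmitted)
        return self.belief, reward

env = ToyCPPOMDP()
belief, reward = env.step(comm_actions=[1, 0], control_action=-env.belief)
```

Training RL policies for the sensors and the controller against this shared reward is what the paper refers to as joint training; with a tighter `channel_capacity`, the sensors' transmit decisions matter more, mirroring the abstract's observation about severely constrained communication.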
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- Cooperative and Asynchronous Transformer-based Mission Planning for Heterogeneous Teams of Mobile Robots [1.1049608786515839]
This paper presents the Cooperative and Asynchronous Transformer-based Mission Planning (CATMiP) framework.
CATMiP uses multi-agent reinforcement learning to coordinate agents with heterogeneous sensing, motion, and actuation capabilities.
It easily adapts to mission complexities and communication constraints, and scales to varying environment sizes and team compositions.
arXiv Detail & Related papers (2024-10-08T21:14:09Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Effective Communication with Dynamic Feature Compression [25.150266946722]
We study a prototypal system in which an observer must communicate its sensory data to a robot controlling a task.
We consider an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
We tested the proposed approach on the well-known CartPole reference control problem, obtaining a significant performance increase.
arXiv Detail & Related papers (2024-01-29T15:35:05Z)
- Push- and Pull-based Effective Communication in Cyber-Physical Systems [15.079887992932692]
We propose an analytical model for push- and pull-based communication in CPSs, observing that policy optimality coincides with a specific Value of Information (VoI) state.
Our results also highlight that, despite providing a better optimal solution, implementable push-based communication strategies may underperform even in relatively simple scenarios.
arXiv Detail & Related papers (2024-01-15T10:06:17Z)
- Asynchronous Perception-Action-Communication with Graph Neural Networks [93.58250297774728]
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments.
The robots must execute a Perception-Action-Communication loop -- they perceive their local environment, communicate with other robots, and take actions in real time.
Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control.
This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication.
arXiv Detail & Related papers (2023-09-18T21:20:50Z)
- Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression [23.36744348465991]
Coordination of robotic swarms and the remote wireless control of industrial systems are among the major use cases for 5G and beyond systems.
In this work, we consider a prototypal system in which an observer must communicate its sensory data to an actor controlling a task.
We propose an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
arXiv Detail & Related papers (2023-01-14T11:43:56Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport [9.891241465396098]
We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
arXiv Detail & Related papers (2021-03-29T01:16:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.