Multi-Agent Reinforcement Learning for Pragmatic Communication and
Control
- URL: http://arxiv.org/abs/2302.14399v1
- Date: Tue, 28 Feb 2023 08:30:24 GMT
- Title: Multi-Agent Reinforcement Learning for Pragmatic Communication and
Control
- Authors: Federico Mason and Federico Chiariotti and Andrea Zanella and Petar
Popovski
- Abstract summary: We propose a joint design that combines goal-oriented communication and networked control into a single optimization model.
Joint training of the communication and control systems can significantly improve the overall performance.
- Score: 40.11766545693947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automation of factories and manufacturing processes has been accelerating
over the past few years, boosted by the Industry 4.0 paradigm, including
diverse scenarios with mobile, flexible agents. Efficient coordination between
mobile robots requires reliable wireless transmission in highly dynamic
environments, often with strict timing requirements. Goal-oriented
communication is a possible solution for this problem: communication decisions
should be optimized for the target control task, providing the information that
is most relevant to decide which action to take. From the control perspective,
networked control design takes the communication impairments into account in
its optimization of physical actions. In this work, we propose a joint design
that combines goal-oriented communication and networked control into a single
optimization model, an extension of a multiagent POMDP which we call
Cyber-Physical POMDP (CP-POMDP). The model is flexible enough to represent
several swarm and cooperative scenarios, and we illustrate its potential with
two simple reference scenarios with a single agent and a set of supporting
sensors. Joint training of the communication and control systems can
significantly improve the overall performance, particularly if communication is
severely constrained, and can even lead to implicit coordination of
communication actions.
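The core idea of the CP-POMDP, joint optimization of communication and control decisions, can be illustrated with a toy sketch (a hypothetical simplification, not the paper's actual model): a sensor transmits the plant state only when the controller's belief is likely stale, and the controller applies feedback on its resulting belief.

```python
# Toy sketch of joint communication/control (hypothetical, not the
# paper's CP-POMDP formulation): the sensor's transmit decision and the
# controller's action are two halves of one joint policy.

def sensor_policy(state, last_sent, threshold=1.0):
    """Communication action: transmit when the belief is likely stale."""
    return abs(state - last_sent) >= threshold

def control_policy(belief):
    """Control action: proportional feedback on the believed state."""
    return -0.5 * belief

state = belief = last_sent = 5.0    # plant starts away from the origin
transmissions = 0
for _ in range(50):
    if sensor_policy(state, last_sent):
        belief = last_sent = state  # fresh belief at the cost of channel use
        transmissions += 1
    u = control_policy(belief)
    state += u + 0.05               # dynamics with a constant unknown drift
    belief += u                     # belief prediction ignores the drift
print(f"|state|={abs(state):.3f} after 50 steps, {transmissions} transmissions")
```

The threshold rule here is hand-written; in the paper's setting both the triggering behavior and the control law would instead be learned jointly by multi-agent reinforcement learning.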
Related papers
- Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models [41.95288786980204]
Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication.
We present a framework for training large language models as collaborative agents to enable coordinated behaviors in cooperative MARL.
A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates.
arXiv Detail & Related papers (2024-07-17T13:14:00Z)
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
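The notion of a learnable communication graph can be sketched in miniature (a hypothetical toy, not CommFormer itself, which trains edge weights end-to-end by backpropagation): each directed edge gets a logit, the sigmoid of the logit gates message passing, and gradient descent trades consensus quality against communication cost. Finite differences stand in for autodiff here.

```python
import itertools
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def loss(theta, xs, lam=0.01):
    """Disagreement after one round of gated message passing,
    plus a penalty on total (soft) edge usage."""
    n = len(xs)
    ys = []
    for i in range(n):
        y = xs[i]
        for j in range(n):
            if i != j:
                y += sigmoid(theta[i][j]) * (xs[j] - xs[i]) / (n - 1)
        ys.append(y)
    mean = sum(ys) / n
    disagreement = sum((y - mean) ** 2 for y in ys)
    comm_cost = lam * sum(sigmoid(theta[i][j])
                          for i in range(n) for j in range(n) if i != j)
    return disagreement + comm_cost

def train(xs, steps=200, lr=0.5, eps=1e-4):
    """Gradient descent on edge logits via central finite differences."""
    n = len(xs)
    theta = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        grad = [[0.0] * n for _ in range(n)]
        for i, j in itertools.product(range(n), repeat=2):
            if i == j:
                continue
            theta[i][j] += eps
            up = loss(theta, xs)
            theta[i][j] -= 2 * eps
            down = loss(theta, xs)
            theta[i][j] += eps
            grad[i][j] = (up - down) / (2 * eps)
        for i, j in itertools.product(range(n), repeat=2):
            theta[i][j] -= lr * grad[i][j]
    return theta

xs = [0.0, 1.0, 2.0]          # three agents with differing local estimates
theta = train(xs)
print(f"trained loss: {loss(theta, xs):.4f}")
```

The learned edge gates settle where stronger communication no longer pays for its cost, which is the same trade-off the learnable-graph formulation optimizes at scale.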
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Effective Communication with Dynamic Feature Compression [25.150266946722]
We study a prototypal system in which an observer must communicate its sensory data to a robot controlling a task.
We consider an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
We tested the proposed approach on the well-known CartPole reference control problem, obtaining a significant performance increase.
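The dynamic-quantization idea can be illustrated with a toy scalar example (hypothetical; the paper selects among ensemble VQ-VAE codebooks with a learned DRL policy, whereas here a simple hand-written rule stands in for the learned one): spend more bits when the observation is near the decision boundary and fewer when the right action is obvious.

```python
# Hypothetical sketch of dynamic feature compression: a scalar
# observation is quantized at a per-step level, trading bits against
# distortion. A rule-based policy stands in for the learned DRL agent.

def quantize(x, bits, lo=-2.0, hi=2.0):
    """Uniform quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = min(levels - 1, max(0, int((x - lo) / step)))
    return lo + (idx + 0.5) * step

def choose_bits(x):
    """Stand-in policy: high resolution near the boundary (|x| small),
    low resolution when the state is unambiguous."""
    return 6 if abs(x) < 0.5 else 2

observations = [-1.8, -0.3, 0.1, 0.9, 1.7]
total_bits = 0
for x in observations:
    b = choose_bits(x)
    total_bits += b
    xq = quantize(x, b)
    print(f"x={x:+.2f}  bits={b}  xq={xq:+.3f}  err={abs(x - xq):.3f}")
print("total bits:", total_bits)
```

With a fixed 6-bit quantizer the same trace would cost 30 bits; the adaptive rule spends 18 while keeping fine resolution only where it matters.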
arXiv Detail & Related papers (2024-01-29T15:35:05Z)
- Pragmatic Communication in Multi-Agent Collaborative Perception [80.14322755297788]
Collaborative perception results in a trade-off between perception ability and communication costs.
We propose PragComm, a multi-agent collaborative perception system with two key components.
PragComm consistently outperforms previous methods with more than 32.7K times lower communication volume.
arXiv Detail & Related papers (2024-01-23T11:58:08Z) - Push- and Pull-based Effective Communication in Cyber-Physical Systems [15.079887992932692]
We propose an analytical model for push- and pull-based communication in CPSs, observing that policy optimality is tied to the Value of Information (VoI) of the state.
Our results also highlight that, despite providing a better optimal solution, implementable push-based communication strategies may underperform even in relatively simple scenarios.
arXiv Detail & Related papers (2024-01-15T10:06:17Z) - Large Language Model Enhanced Multi-Agent Systems for 6G Communications [94.45712802626794]
We propose a multi-agent system with customized communication knowledge and tools for solving communication-related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z) - Asynchronous Perception-Action-Communication with Graph Neural Networks [93.58250297774728]
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments.
The robots must execute a Perception-Action-Communication loop -- they perceive their local environment, communicate with other robots, and take actions in real time.
Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control.
This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication.
arXiv Detail & Related papers (2023-09-18T21:20:50Z)
- Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression [23.36744348465991]
Coordination of robotic swarms and the remote wireless control of industrial systems are among the major use cases for 5G and beyond systems.
In this work, we consider a prototypal system in which an observer must communicate its sensory data to an actor controlling a task.
We propose an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
arXiv Detail & Related papers (2023-01-14T11:43:56Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport [9.891241465396098]
We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
arXiv Detail & Related papers (2021-03-29T01:16:12Z)
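The event-triggered architecture described in the last entry can be sketched with a minimal toy (hypothetical linear plant, not the paper's cooperative-transport setup): a feedback controller acts on the last transmitted state, and a triggering mechanism sends a fresh value only when the true state has drifted past a threshold.

```python
# Minimal sketch of event-triggered communication and control
# (hypothetical toy): compare a triggered scheme against sending
# an update at every step.

def simulate(threshold, steps=100):
    state, held = 4.0, 4.0      # true state and last transmitted value
    transmissions = 0
    for _ in range(steps):
        if abs(state - held) > threshold:   # triggering mechanism
            held = state
            transmissions += 1
        u = -0.2 * held                     # feedback on the held value
        state = state + u                   # simple stable linear dynamics
    return abs(state), transmissions

err_et, tx_et = simulate(threshold=0.5)
err_ps, tx_ps = simulate(threshold=0.0)     # effectively send every step
print(f"event-triggered: |x|={err_et:.3f} with {tx_et} transmissions")
print(f"always-send:     |x|={err_ps:.3f} with {tx_ps} transmissions")
```

The triggered scheme tolerates a bounded residual error in exchange for far fewer channel uses; the cited paper learns both the trigger and the controller with multi-agent RL instead of fixing them by hand.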
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.