Deep reinforcement learning of event-triggered communication and control
for multi-agent cooperative transport
- URL: http://arxiv.org/abs/2103.15260v1
- Date: Mon, 29 Mar 2021 01:16:12 GMT
- Authors: Kazuki Shibata, Tomohiko Jimbo and Takamitsu Matsubara
- Abstract summary: We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore a multi-agent reinforcement learning approach to
address the design problem of communication and control strategies for
multi-agent cooperative transport. Typical end-to-end deep neural network
policies may be insufficient for covering communication and control: these
methods cannot decide when to communicate and can only work with
fixed-rate communication. Therefore, our framework exploits an event-triggered
architecture, namely, a feedback controller that computes the communication
input and a triggering mechanism that determines when the input has to be
updated again. Such event-triggered control policies are efficiently optimized
using a multi-agent deep deterministic policy gradient. Numerical simulations
confirmed that our approach can balance transport performance against
communication savings.
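The event-triggered architecture described above can be illustrated with a minimal sketch. The proportional controller, the threshold-based trigger, and the integrator plant below are illustrative assumptions, not the paper's learned policies (which are trained with a multi-agent deep deterministic policy gradient); the sketch only shows the mechanism: the control input is recomputed and communicated only when the triggering condition fires, not at every step.

```python
import numpy as np

def feedback_controller(state, gain=0.8):
    """Illustrative proportional feedback law standing in for the
    learned controller: drive the state toward zero."""
    return -gain * state

def trigger(state, last_update_state, threshold=0.5):
    """Assumed triggering rule (not the paper's learned mechanism):
    communicate only when the state has drifted far enough from the
    state at the last communication event."""
    return np.linalg.norm(state - last_update_state) > threshold

def run_episode(x0, steps=50, dt=0.1):
    """Simulate a scalar integrator plant (x' = u) under
    event-triggered control. Returns the final state and the number
    of communication events."""
    x = np.array([x0], dtype=float)
    x_last = x.copy()            # state at the last communication event
    u = feedback_controller(x)   # initial control input
    events = 1
    for _ in range(steps):
        if trigger(x, x_last):   # recompute input only on a trigger
            u = feedback_controller(x)
            x_last = x.copy()
            events += 1
        x = x + dt * u           # integrator dynamics
    return float(x[0]), events

final_x, events = run_episode(5.0)
```

Run from an initial state of 5.0, the state settles near zero while the controller communicates far fewer than the 50 simulation steps, which is the performance/communication trade-off the triggering mechanism is meant to expose.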
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- Effective Communication with Dynamic Feature Compression [25.150266946722]
We study a prototypal system in which an observer must communicate its sensory data to a robot controlling a task.
We consider an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
We tested the proposed approach on the well-known CartPole reference control problem, obtaining a significant performance increase.
arXiv Detail & Related papers (2024-01-29T15:35:05Z)
- Emergent Communication Protocol Learning for Task Offloading in Industrial Internet of Things [30.146175299047325]
We learn a computation offloading decision and multichannel access policy with corresponding signaling.
Specifically, the base station and industrial Internet of Things mobile devices are reinforcement learning agents.
We adopt an emergent communication protocol learning framework to solve this problem.
arXiv Detail & Related papers (2024-01-23T17:06:13Z)
- Will 6G be Semantic Communications? Opportunities and Challenges from Task Oriented and Secure Communications to Integrated Sensing [49.83882366499547]
This paper explores opportunities and challenges of task (goal)-oriented and semantic communications for next-generation (NextG) networks through the integration of multi-task learning.
We employ deep neural networks representing a dedicated encoder at the transmitter and multiple task-specific decoders at the receiver.
We scrutinize potential vulnerabilities stemming from adversarial attacks during both training and testing phases.
arXiv Detail & Related papers (2024-01-03T04:01:20Z)
- Multi-Agent Reinforcement Learning for Pragmatic Communication and Control [40.11766545693947]
We propose a joint design that combines goal-oriented communication and networked control into a single optimization model.
Joint training of the communication and control systems can significantly improve the overall performance.
arXiv Detail & Related papers (2023-02-28T08:30:24Z)
- Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression [23.36744348465991]
Coordination of robotic swarms and the remote wireless control of industrial systems are among the major use cases for 5G and beyond systems.
In this work, we consider a prototypal system in which an observer must communicate its sensory data to an actor controlling a task.
We propose an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
arXiv Detail & Related papers (2023-01-14T11:43:56Z)
- Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling [57.271494741212166]
This paper formulates and solves the communication time minimization problem.
It is found that the optimized policy gradually turns its priority from suppressing the remaining communication rounds to reducing per-round latency as the training process evolves.
The effectiveness of the proposed scheme is demonstrated via a use case on collaborative 3D object detection in autonomous driving.
arXiv Detail & Related papers (2021-07-24T11:39:17Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Learning Event-triggered Control from Data through Joint Optimization [7.391641422048646]
We present a framework for model-free learning of event-triggered control strategies.
We propose a novel algorithm based on hierarchical reinforcement learning.
The resulting algorithm achieves high-performance control alongside resource savings and scales seamlessly to nonlinear and high-dimensional systems.
arXiv Detail & Related papers (2020-08-11T14:15:38Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.