Distributed Transmission Control for Wireless Networks using Multi-Agent
Reinforcement Learning
- URL: http://arxiv.org/abs/2205.06800v1
- Date: Fri, 13 May 2022 17:53:00 GMT
- Title: Distributed Transmission Control for Wireless Networks using Multi-Agent
Reinforcement Learning
- Authors: Collin Farquhar, Prem Sagar Pattanshetty Vasanth Kumar, Anu Jagannath,
Jithin Jagannath
- Abstract summary: We study the problem of transmission control through the lens of multi-agent reinforcement learning.
We achieve this collaborative behavior by studying the effects of different action spaces.
We submit that approaches similar to ours may be useful in other domains that use multi-agent reinforcement learning with independent agents.
- Score: 0.9176056742068812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine the problem of transmission control, i.e., when to transmit, in
distributed wireless communications networks through the lens of multi-agent
reinforcement learning. Most other works using reinforcement learning to
control or schedule transmissions use some centralized control mechanism,
whereas our approach is fully distributed. Each transmitter node is an
independent reinforcement learning agent and does not have direct knowledge of
the actions taken by other agents. We consider the case where only a subset of
agents can successfully transmit at a time, so each agent must learn to act
cooperatively with other agents. An agent may decide to transmit a certain
number of steps into the future, but this decision is not communicated to the
other agents, so it is the task of the individual agents to attempt to transmit
at appropriate times. We achieve this collaborative behavior by studying the
effects of different action spaces. We are agnostic to the physical layer,
which makes our approach applicable to many types of networks. We submit that
approaches similar to ours may be useful in other domains that use multi-agent
reinforcement learning with independent agents.
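As a toy illustration of the setting described above (this is a sketch under assumptions, not the authors' implementation; the class name, capacity parameter, and reward values are invented for illustration), the shared-channel dynamics can be modeled as independent agents that each privately commit to transmitting a chosen number of steps in the future, with a transmission succeeding only if no more agents than the channel capacity transmit in the same step:

```python
class ToyTransmissionEnv:
    """Toy model of distributed transmission control: independent agents
    share a channel on which only `capacity` simultaneous transmissions
    succeed. Agents do not observe each other's commitments."""

    def __init__(self, n_agents=4, capacity=1):
        self.n_agents = n_agents
        self.capacity = capacity          # how many agents may transmit at once
        # schedule[i] = steps until agent i transmits (None = idle)
        self.schedule = [None] * n_agents

    def step(self, actions):
        """actions[i] in {0, 1, 2, ...}: 0 = stay idle this round,
        k >= 1 = privately commit to transmit k steps from now."""
        for i, a in enumerate(actions):
            if self.schedule[i] is None and a >= 1:
                self.schedule[i] = a
        # advance one time step; counters that hit zero transmit now
        transmitting = []
        for i in range(self.n_agents):
            if self.schedule[i] is not None:
                self.schedule[i] -= 1
                if self.schedule[i] == 0:
                    transmitting.append(i)
                    self.schedule[i] = None
        # more simultaneous transmissions than capacity => collision
        success = len(transmitting) <= self.capacity
        rewards = [0.0] * self.n_agents
        for i in transmitting:
            rewards[i] = 1.0 if success else -1.0
        return rewards
```

For example, with two agents and capacity one, the joint action `[2, 1]` staggers the transmissions so both eventually succeed, while `[1, 1]` causes a collision; learning to prefer the former without communication is the kind of cooperative behavior the action-space design aims to induce.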
Related papers
- Multi-agent assignment via state augmented reinforcement learning [3.4992411324493515]
We address the conflicting requirements of a multi-agent assignment problem through constrained reinforcement learning.
We resort to a state augmentation approach in which the oscillation of dual variables is exploited by agents to alternate between tasks.
arXiv Detail & Related papers (2024-06-03T20:56:12Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved great success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
- Consensus Learning for Cooperative Multi-Agent Reinforcement Learning [12.74348597962689]
We propose consensus learning for cooperative multi-agent reinforcement learning.
We feed the inferred consensus as an explicit input to the network of agents.
Our proposed method can be extended to various multi-agent reinforcement learning algorithms.
arXiv Detail & Related papers (2022-06-06T12:43:07Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via Learned Messaging [14.960795846548029]
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation learning abilities of deep neural networks.
This paper considers the case where there is a single, powerful, central agent that can observe the entire observation space, and there are multiple, low powered, local agents that can only receive local observations and cannot communicate with each other.
The job of the central agent is to learn what message to send to each local agent, based on the global observations, by determining what additional information an individual agent should receive so that it can make a better decision.
arXiv Detail & Related papers (2021-01-18T19:00:12Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.