Resource Management in Wireless Networks via Multi-Agent Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2002.06215v2
- Date: Mon, 11 Jan 2021 06:04:45 GMT
- Title: Resource Management in Wireless Networks via Multi-Agent Deep Reinforcement Learning
- Authors: Navid Naderializadeh, Jaroslaw Sydir, Meryem Simsek, Hosein Nikopour
- Abstract summary: We propose a mechanism for distributed resource management and interference mitigation in wireless networks using multi-agent deep reinforcement learning (RL).
We equip each transmitter in the network with a deep RL agent that receives delayed observations from its associated users, while also exchanging observations with its neighboring agents.
Our proposed framework enables agents to make decisions simultaneously and in a distributed manner, unaware of the concurrent decisions of other agents.
- Score: 15.091308167639815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a mechanism for distributed resource management and interference
mitigation in wireless networks using multi-agent deep reinforcement learning
(RL). We equip each transmitter in the network with a deep RL agent that
receives delayed observations from its associated users, while also exchanging
observations with its neighboring agents, and decides on which user to serve
and what transmit power to use at each scheduling interval. Our proposed
framework enables agents to make decisions simultaneously and in a distributed
manner, unaware of the concurrent decisions of other agents. Moreover, our
design of the agents' observation and action spaces is scalable, in the sense
that an agent trained on a scenario with a specific number of transmitters and
users can be applied to scenarios with different numbers of transmitters and/or
users. Simulation results demonstrate the superiority of our proposed approach
compared to decentralized baselines in terms of the tradeoff between average
and $5^{th}$ percentile user rates, while achieving performance close to, and
even in certain cases outperforming, that of a centralized
information-theoretic baseline. We also show that our trained agents are robust
and maintain their performance gains when experiencing mismatches between train
and test deployments.
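As a concrete illustration of the framework described in the abstract, the sketch below implements the per-agent decision loop it implies: each transmitter's agent assembles delayed reports from its own users together with observations shared by neighboring agents, then simultaneously picks which user to serve and a transmit power. All names, shapes, and the stub policy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the per-transmitter agent loop:
# delayed local user feedback + neighbor-shared observations -> (user, power).
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 4          # transmitters, each hosting one RL agent (assumed)
USERS_PER_AGENT = 3   # users associated with each transmitter (assumed)
POWER_LEVELS = np.array([0.0, 0.1, 0.5, 1.0])  # discrete transmit powers (assumed)
OBS_DIM = 8           # per-user feature size, e.g. delayed rate reports (assumed)

def local_observation(agent_id):
    """Delayed measurements reported by the agent's own users (stub)."""
    return rng.normal(size=(USERS_PER_AGENT, OBS_DIM))

def neighbor_observations(agent_id, neighbors):
    """Observations exchanged with neighboring agents (stub)."""
    return {j: rng.normal(size=(USERS_PER_AGENT, OBS_DIM)) for j in neighbors}

def policy(obs_own, obs_nbrs):
    """Stand-in for a trained deep RL policy: returns (user index, power index)."""
    nbr_summary = (np.mean([o for o in obs_nbrs.values()], axis=0)
                   if obs_nbrs else np.zeros_like(obs_own))
    state = np.concatenate([obs_own, nbr_summary], axis=1)  # (users, 2*OBS_DIM)
    user = int(np.argmax(state.sum(axis=1)))                # pick "best" user
    power = int(rng.integers(len(POWER_LEVELS)))            # placeholder choice
    return user, power

neighbors_of = {i: [j for j in range(N_AGENTS) if j != i][:2] for i in range(N_AGENTS)}

# One scheduling interval: all agents decide simultaneously, each unaware of
# the others' concurrent choices, matching the distributed setting above.
for i in range(N_AGENTS):
    own = local_observation(i)
    nbrs = neighbor_observations(i, neighbors_of[i])
    user, p_idx = policy(own, nbrs)
    print(f"agent {i}: serve user {user} at power {POWER_LEVELS[p_idx]:.1f}")
```

Summarizing neighbor observations into a fixed-size aggregate is one plausible way to realize the scalability property the abstract claims, since the policy's input size then does not depend on the number of transmitters or users.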
Related papers
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization [63.554226552130054]
Generalization poses a significant challenge in Multi-agent Reinforcement Learning (MARL).
The extent to which an agent is influenced by unseen co-players depends on the agent's policy and the specific scenario.
We present the Level of Influence (LoI), a metric quantifying the interaction intensity among agents within a given scenario and environment.
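The abstract does not spell out how LoI is computed, so the sketch below shows only a generic influence proxy under our own assumptions, NOT the paper's Level of Influence definition: measure how much an agent's average return shifts when a co-player's policy is swapped out.

```python
# Hedged sketch of an interaction-intensity proxy (not the paper's LoI metric):
# a larger return gap under a swapped co-player suggests stronger interaction.
import numpy as np

rng = np.random.default_rng(7)

def episode_return(agent_policy, coplayer_policy, trials=500):
    """Toy environment whose returns depend on the co-player (stub)."""
    coupling = 0.8 if coplayer_policy == "aggressive" else 0.1
    return np.mean(rng.normal(loc=1.0, scale=0.1, size=trials) -
                   coupling * rng.random(trials))

base = episode_return("fixed", "aggressive")
swapped = episode_return("fixed", "passive")
influence = abs(base - swapped)   # larger gap -> stronger interaction
print(f"influence proxy: {influence:.3f}")
```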
arXiv Detail & Related papers (2023-10-11T06:09:26Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
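As a rough illustration of the setting (not the paper's optimized allocation strategy), the sketch below runs a standard diffusion-style adaptive regression in which agents exchange coarsely quantized estimates, modeling the communication constraint; all constants are assumed.

```python
# Hedged sketch: distributed adaptive regression with compressed exchanges.
# The LMS update and uniform quantizer are generic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
D, AGENTS, MU, STEPS = 5, 4, 0.05, 200
w_true = rng.normal(size=D)                   # unknown regression vector
W = np.zeros((AGENTS, D))                     # each agent's local estimate

def quantize(w, step=0.1):
    """Crude uniform quantizer modeling the communication constraint (assumed)."""
    return step * np.round(w / step)

for _ in range(STEPS):
    # adapt: each agent takes an LMS step on its own streaming data
    for k in range(AGENTS):
        x = rng.normal(size=D)
        d = x @ w_true + 0.01 * rng.normal()
        W[k] += MU * (d - x @ W[k]) * x
    # combine: average the *quantized* neighbor estimates (fully connected here)
    Q = np.array([quantize(w) for w in W])
    W = np.tile(Q.mean(axis=0), (AGENTS, 1))

print("estimation error:", np.linalg.norm(W[0] - w_true))
```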
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Distributed Transmission Control for Wireless Networks using Multi-Agent Reinforcement Learning [0.9176056742068812]
We study the problem of transmission control through the lens of multi-agent reinforcement learning.
We achieve this collaborative behavior by studying the effects of different action spaces.
We submit that approaches similar to ours may be useful in other domains that use multi-agent reinforcement learning with independent agents.
arXiv Detail & Related papers (2022-05-13T17:53:00Z)
- Robust Event-Driven Interactions in Cooperative Multi-Agent Learning [0.0]
We present an approach to reduce the communication required between agents in a Multi-Agent learning system by exploiting the inherent robustness of the underlying Markov Decision Process.
We compute so-called robustness surrogate functions offline, which give agents a conservative indication of how far their state measurements can deviate before they need to update other agents in the system.
This results in fully distributed decision functions, enabling agents to decide when it is necessary to update others.
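The event-driven mechanism lends itself to a compact illustration. In the hedged sketch below, a constant threshold stands in for the paper's precomputed robustness surrogate functions: an agent updates the others only when its state has drifted far enough from the last broadcast value.

```python
# Hedged sketch of event-triggered communication: broadcast only when the
# state deviates beyond a robustness threshold (constant here, by assumption).
import numpy as np

rng = np.random.default_rng(2)
THRESHOLD = 0.5   # conservative allowable deviation (assumed constant)

state = np.zeros(2)
last_broadcast = state.copy()
messages = 0

for t in range(100):
    state = state + 0.1 * rng.normal(size=2)       # local dynamics (stub)
    if np.linalg.norm(state - last_broadcast) > THRESHOLD:
        last_broadcast = state.copy()              # update the other agents
        messages += 1                              # one communication event

print(f"communicated {messages}/100 steps instead of every step")
```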
arXiv Detail & Related papers (2022-04-07T11:00:39Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- Multi-Agent Adversarial Attacks for Multi-Channel Communications [24.576538640840976]
We propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario.
By modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender strategy.
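To make the adversary-as-learner idea concrete, here is a hedged sketch in which a simple epsilon-greedy bandit (a stand-in for the paper's RL agents) learns a channel and power choice purely from observed rewards, with no knowledge of the sender's strategy; the reward model is invented.

```python
# Hedged sketch: an adversary learns (channel, power) from rewards alone.
# Epsilon-greedy bandit replaces the paper's agents; reward model is a toy.
import numpy as np

rng = np.random.default_rng(3)
CHANNELS, POWERS = 4, 3
Q = np.zeros((CHANNELS, POWERS))      # action-value estimates
counts = np.zeros_like(Q)
sender_channel = 2                    # hidden from the adversary

for t in range(2000):
    if rng.random() < 0.1:            # explore
        c, p = rng.integers(CHANNELS), rng.integers(POWERS)
    else:                             # exploit
        c, p = np.unravel_index(np.argmax(Q), Q.shape)
    # reward: damage caused when jamming the sender's channel (toy model)
    reward = (1.0 if c == sender_channel else 0.0) * (p + 1) / POWERS
    counts[c, p] += 1
    Q[c, p] += (reward - Q[c, p]) / counts[c, p]

print("learned to jam channel:", np.unravel_index(np.argmax(Q), Q.shape)[0])
```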
arXiv Detail & Related papers (2022-01-22T23:57:00Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL), in which multiple agents coordinate over a wireless network, is a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- AoI-Aware Resource Allocation for Platoon-Based C-V2X Networks via Multi-Agent Multi-Task Reinforcement Learning [22.890835786710316]
This paper investigates the problem of age-of-information (AoI)-aware radio resource management for a platooning system.
Multiple autonomous platoons exploit the cellular wireless vehicle-to-everything (C-V2X) communication technology to disseminate the cooperative awareness messages (CAMs) to their followers.
We exploit a distributed resource allocation framework based on multi-agent reinforcement learning (MARL), where each platoon leader (PL) acts as an agent and interacts with the environment to learn its optimal policy.
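The age-of-information quantity at the heart of this paper follows simple dynamics, sketched below under an assumed Bernoulli delivery model rather than the paper's C-V2X resource-allocation decisions: age grows by one slot per interval and resets when a CAM is delivered.

```python
# Hedged sketch of AoI dynamics: age increments each slot, resets on delivery.
# The Bernoulli delivery probability is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(4)
P_DELIVERY = 0.3   # chance a CAM gets through in a slot (assumed)

aoi, trace = 0, []
for slot in range(50):
    delivered = rng.random() < P_DELIVERY
    aoi = 1 if delivered else aoi + 1   # reset on delivery, otherwise age
    trace.append(aoi)

print("average AoI over 50 slots:", sum(trace) / len(trace))
```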
arXiv Detail & Related papers (2021-05-10T08:39:56Z)
- Team Deep Mixture of Experts for Distributed Power Control [23.612400109629544]
We propose an architecture inspired by the well-known Mixture of Experts (MoE) model.
We show the ability of the so-called Team-DMoE model to efficiently track time-varying statistical scenarios.
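For orientation, the sketch below shows the generic Mixture-of-Experts computation the Team-DMoE model builds on, with a softmax gate mixing linear experts; shapes are illustrative and the paper's team-training procedure is not reproduced.

```python
# Hedged sketch of a generic MoE forward pass: a gate re-weights experts,
# which is what lets a mixture track time-varying conditions.
import numpy as np

rng = np.random.default_rng(5)
D, N_EXPERTS = 6, 3
W_experts = rng.normal(size=(N_EXPERTS, D))   # one linear expert each (stub)
W_gate = rng.normal(size=(N_EXPERTS, D))      # gating network (stub)

def moe_output(x):
    logits = W_gate @ x
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                        # softmax expert weights
    expert_outs = W_experts @ x               # each expert's scalar decision
    return float(gate @ expert_outs)          # gate-weighted mixture

print("mixed power-control decision:", moe_output(rng.normal(size=D)))
```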
arXiv Detail & Related papers (2020-07-28T12:01:06Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
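The quoted question translates almost directly into code. The hedged sketch below estimates an agent's expected utility over randomly sampled sub-groups of its observed entities, with a toy utility in place of a learned value network.

```python
# Hedged sketch of the randomized entity-wise idea: average an agent's utility
# over random sub-groups of its observed entities. Utility here is a toy.
import numpy as np

rng = np.random.default_rng(6)
N_ENTITIES, FEAT = 5, 4
entities = rng.normal(size=(N_ENTITIES, FEAT))   # observed entities (stub)

def utility(visible):
    """Toy utility of the agent given the entities it currently attends to."""
    return float(visible.sum())

samples = []
for _ in range(100):
    mask = rng.random(N_ENTITIES) < 0.5          # random sub-group of entities
    samples.append(utility(entities[mask]))

print("expected utility over random sub-groups:", np.mean(samples))
```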
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.