Hybrid Information-driven Multi-agent Reinforcement Learning
- URL: http://arxiv.org/abs/2102.01004v1
- Date: Mon, 1 Feb 2021 17:28:39 GMT
- Title: Hybrid Information-driven Multi-agent Reinforcement Learning
- Authors: William A. Dawson, Ruben Glatt, Edward Rusu, Braden C. Soper, Ryan A. Goldhahn
- Abstract summary: Information theoretic sensor management approaches are too computationally intensive for large state spaces.
Reinforcement learning is a promising alternative which can find approximate solutions to distributed optimal control problems.
We propose a hybrid information-driven multi-agent reinforcement learning approach.
- Score: 3.7011129410662553
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Information theoretic sensor management approaches are an ideal solution to
state estimation problems when considering the optimal control of multi-agent
systems; however, they are too computationally intensive for large state spaces,
especially when considering the limited computational resources typical of
large-scale distributed multi-agent systems. Reinforcement learning (RL) is a
promising alternative which can find approximate solutions to distributed
optimal control problems that take into account the resource constraints
inherent in many systems of distributed agents. However, the RL training can be
prohibitively inefficient, especially in low-information environments where
agents receive little to no feedback in large portions of the state space. We
propose a hybrid information-driven multi-agent reinforcement learning (MARL)
approach that utilizes information theoretic models as heuristics to help the
agents navigate large sparse state spaces, coupled with information based
rewards in an RL framework to learn higher-level policies. This paper presents
our ongoing work towards this objective. Our preliminary findings show that
such an approach can result in a system of agents that are approximately three
orders of magnitude more efficient at exploring a sparse state space than
naive baselines. While the work is still in its early stages, it provides a
promising direction for future research.
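The paper reports work in progress and includes no implementation; the following is a minimal sketch of the core idea, in which an expected-entropy-reduction heuristic steers a searching agent and the realized entropy reduction serves as the information-based reward an RL policy would learn from. The grid world, sensor model, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Minimal sketch (not the authors' code): one agent searches a sparse grid
# for a hidden target. An information-theoretic heuristic (expected entropy
# reduction of the belief) ranks candidate moves, and the realized entropy
# reduction serves as the information-based reward an RL policy would learn
# from. Grid size, sensor model, and all names are illustrative assumptions.

rng = np.random.default_rng(0)
N = 8                                    # grid is N x N
target = tuple(rng.integers(0, N, 2))    # hidden target cell

def entropy(p):
    q = p[p > 1e-12]
    return float(-(q * np.log(q)).sum())

def detect_prob(cell, pos, p_hit=0.9, p_false=0.05):
    # Hypothetical sensor: fires with p_hit on the target cell, p_false elsewhere.
    return p_hit if cell == pos else p_false

def bayes_update(belief, pos, detected):
    # Posterior over the target location given one (non-)detection at pos.
    like = np.empty_like(belief)
    for i in range(N):
        for j in range(N):
            p = detect_prob((i, j), pos)
            like[i, j] = p if detected else 1.0 - p
    post = like * belief
    return post / post.sum()

def expected_info_gain(belief, pos):
    # Heuristic: expected entropy reduction, averaged over both sensor outcomes.
    p_det = sum(detect_prob((i, j), pos) * belief[i, j]
                for i in range(N) for j in range(N))
    h1 = (p_det * entropy(bayes_update(belief, pos, True))
          + (1 - p_det) * entropy(bayes_update(belief, pos, False)))
    return entropy(belief) - h1

belief = np.full((N, N), 1.0 / N**2)     # uniform prior over target location
pos = (0, 0)
for step in range(200):
    moves = [(pos[0] + di, pos[1] + dj)
             for di, dj in [(0, 1), (0, -1), (1, 0), (-1, 0)]
             if 0 <= pos[0] + di < N and 0 <= pos[1] + dj < N]
    pos = max(moves, key=lambda m: expected_info_gain(belief, m))
    detected = rng.random() < detect_prob(target, pos)
    h_before = entropy(belief)
    belief = bayes_update(belief, pos, detected)
    reward = h_before - entropy(belief)  # information-based reward for RL
    if belief.max() > 0.99:
        break
print("MAP estimate:", np.unravel_index(belief.argmax(), belief.shape),
      "true target:", target, "steps taken:", step + 1)
```

In the full approach, the reward computed above would train higher-level policies with an RL algorithm, while the heuristic keeps exploration tractable in the sparse state space.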
Related papers
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
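As a rough illustration of the decision-making loop, a REINFORCE-style bandit can search over discrete masking ratios by treating a downstream score as its reward; the score function below is a synthetic stand-in for the real pretraining/segmentation signal, and all names are assumptions rather than the paper's method.

```python
import numpy as np

# Toy sketch: a REINFORCE bandit searches over discrete masking ratios,
# using a downstream score as reward. The score is a synthetic stand-in
# peaking at 0.6; the real method would plug in its pretraining signal.

rng = np.random.default_rng(1)
ratios = np.array([0.25, 0.5, 0.6, 0.75, 0.9])   # candidate mask ratios
logits = np.zeros(len(ratios))                    # bandit policy parameters
lr, baseline = 0.5, 0.0

def downstream_score(ratio):
    # Hypothetical proxy for "pretrain at this ratio, evaluate downstream".
    return -(ratio - 0.6) ** 2 + 0.02 * rng.standard_normal()

for t in range(500):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                          # softmax policy
    a = rng.choice(len(ratios), p=probs)
    r = downstream_score(ratios[a])
    baseline += 0.05 * (r - baseline)             # running reward baseline
    grad = -probs
    grad[a] += 1.0                                # d log pi(a) / d logits
    logits += lr * (r - baseline) * grad          # REINFORCE update
print("learned masking ratio:", ratios[np.argmax(logits)])
```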
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Evolving Populations of Diverse RL Agents with MAP-Elites [1.5575376673936223]
We introduce a flexible framework that allows the use of any Reinforcement Learning (RL) algorithm and evolves populations of agents, rather than just policies.
We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems.
arXiv Detail & Related papers (2023-03-09T19:05:45Z)
- A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning [18.918558716102144]
We will shed light on current approaches to tractably understanding and analyzing large-population systems.
We will survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems.
arXiv Detail & Related papers (2022-09-08T14:58:50Z)
- PooL: Pheromone-inspired Communication Framework for Large Scale Multi-Agent Reinforcement Learning [0.0]
PooL is an indirect communication framework applied to large-scale multi-agent reinforcement learning.
PooL uses the release and utilization mechanism of pheromones to control large-scale agent coordination.
PooL can capture effective information and achieve higher rewards than other state-of-the-art methods with lower communication costs.
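A minimal sketch of such a pheromone channel, assuming a shared map with release, evaporation, diffusion, and local read-out; the rates, topology, and names are illustrative, not PooL's implementation.

```python
import numpy as np

# Illustrative pheromone map: agents deposit (release), the field evaporates
# and diffuses, and each agent reads its local patch (utilization) as an
# indirect communication channel. All constants are assumptions.

rng = np.random.default_rng(2)
N, AGENTS, EVAP, DIFF = 32, 8, 0.05, 0.1
pheromone = np.zeros((N, N))
positions = rng.integers(0, N, size=(AGENTS, 2))

def release(field, positions, amount=1.0):
    for x, y in positions:
        field[x, y] += amount                    # deposit at agent's cell

def evolve(field):
    # Evaporation plus 4-neighbour diffusion on a torus.
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    return (1 - EVAP) * (field + DIFF * lap)

def utilize(field, pos, k=3):
    # An agent's observation: the k x k pheromone patch around it.
    x, y = pos
    return field[max(0, x - k // 2):x + k // 2 + 1,
                 max(0, y - k // 2):y + k // 2 + 1]

for step in range(100):
    release(pheromone, positions)
    pheromone = evolve(pheromone)
    obs = [utilize(pheromone, p) for p in positions]  # feeds each policy
    positions = (positions + rng.integers(-1, 2, positions.shape)) % N
print("mean pheromone after 100 steps:", round(float(pheromone.mean()), 3))
```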
arXiv Detail & Related papers (2022-02-20T03:09:53Z)
- Relative Distributed Formation and Obstacle Avoidance with Multi-agent Reinforcement Learning [20.401609420707867]
We propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL).
Compared with baselines, our method achieves lower formation error, faster formation convergence, and an on-par obstacle-avoidance success rate.
arXiv Detail & Related papers (2021-11-14T13:02:45Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution (CTDE) paradigm.
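A schematic of the value-decomposition principle LOMAQ builds on, assuming the generic additive case: each agent learns a local utility from its own local reward and acts greedily on it. This is not LOMAQ's exact partitioning scheme, only the underlying idea.

```python
import numpy as np

# Illustrative additive decomposition: Q_tot(s, a) ~ sum_i Q_i(s, a_i), with
# each local utility trained from a *local* reward. Environment, rewards,
# and constants are toy stand-ins.

rng = np.random.default_rng(3)
S, A, AGENTS, ALPHA, GAMMA, EPS = 5, 2, 3, 0.1, 0.9, 0.1
Q = np.zeros((AGENTS, S, A))              # one local Q-table per agent

def local_rewards(s, actions):
    # Stand-in for per-agent rewards exposed by the reward partition.
    return np.array([1.0 if a == s % A else 0.0 for a in actions])

s = 0
for step in range(5000):
    actions = [int(a) if rng.random() > EPS else int(rng.integers(A))
               for a in Q[:, s, :].argmax(axis=1)]   # greedy on local Q_i
    r = local_rewards(s, actions)
    s_next = int(rng.integers(S))         # toy random transition
    for i in range(AGENTS):
        td = r[i] + GAMMA * Q[i, s_next].max() - Q[i, s, actions[i]]
        Q[i, s, actions[i]] += ALPHA * td # update from the local reward only
    s = s_next
print("Q_tot(s=0) ~", round(sum(float(Q[i, 0].max()) for i in range(AGENTS)), 2))
```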
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
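A toy sketch of the diffusion mechanism behind such agreement, assuming a ring of agents that alternate a local (meta-)gradient step with a neighbour-averaging combine step; the quadratic local objective and all names are illustrative, not the paper's algorithm.

```python
import numpy as np

# Diffusion-style cooperation: each agent descends its own local objective,
# then combines parameters with neighbours via a doubly stochastic matrix W.
# The quadratic local objectives stand in for per-agent MAML meta-losses.

rng = np.random.default_rng(4)
AGENTS, DIM, LR = 4, 3, 0.05
theta = rng.standard_normal((AGENTS, DIM))   # each agent's meta-parameters

# Ring topology combination weights (rows and columns sum to 1).
W = np.zeros((AGENTS, AGENTS))
for i in range(AGENTS):
    W[i, i] = 0.5
    W[i, (i - 1) % AGENTS] = W[i, (i + 1) % AGENTS] = 0.25

def local_grad(i, th):
    # Stand-in for agent i's meta-gradient on its local tasks.
    return th - np.full(DIM, float(i))       # heterogeneous local optima

for step in range(300):
    adapted = np.array([theta[i] - LR * local_grad(i, theta[i])
                        for i in range(AGENTS)])  # local adaptation step
    theta = W @ adapted                           # diffusion combine step
print("max disagreement across agents:",
      round(float(np.abs(theta - theta.mean(axis=0)).max()), 4))
```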
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning [90.93035276307239]
We propose an information theoretic regularization objective and an annealing-based optimization method to achieve better generalization ability in RL agents.
We demonstrate the extreme generalization benefits of our approach in different domains ranging from maze navigation to robotic tasks.
This work provides a principled way to improve generalization in RL by gradually removing information that is redundant for task-solving.
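A schematic of an annealed information-bottleneck penalty of this kind, assuming a Gaussian state encoder whose KL divergence to a standard normal serves as the usual variational proxy for I(Z; S); the schedule and all names are illustrative.

```python
import numpy as np

# Annealed information bottleneck (schematic): total loss is the task loss
# plus beta(t) * KL, where the KL term upper-bounds I(Z; S) for a Gaussian
# encoder and beta is ramped up so redundant information is removed gradually.

def kl_std_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * float(np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar))

def beta_schedule(step, total, beta_max=1e-3):
    # Annealing: no compression at first, full bottleneck by mid-training.
    return beta_max * min(1.0, step / (0.5 * total))

def ib_loss(task_loss, mu, logvar, step, total):
    return task_loss + beta_schedule(step, total) * kl_std_normal(mu, logvar)

# Usage: with encoder outputs mu, logvar for a state batch at step t of T,
# backpropagate ib_loss(policy_loss, mu, logvar, t, T) as usual.
mu, logvar = np.zeros(8), np.zeros(8)
print(ib_loss(1.0, mu, logvar, step=500, total=1000))  # KL is 0 here -> 1.0
```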
arXiv Detail & Related papers (2020-08-03T02:24:20Z)
- A Survey of Reinforcement Learning Algorithms for Dynamically Varying Environments [1.713291434132985]
Reinforcement learning (RL) algorithms find applications in inventory control, recommender systems, vehicular traffic management, cloud computing and robotics.
Real-world complications of many tasks arising in these domains make them difficult to solve with the basic assumptions underlying classical RL algorithms.
This paper provides a survey of RL methods developed for handling dynamically varying environment models.
A representative collection of these algorithms is discussed in detail in this work along with their categorization and their relative merits and demerits.
arXiv Detail & Related papers (2020-05-19T09:42:42Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments while reducing information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.