Dynamic Collaborative Multi-Agent Reinforcement Learning Communication
for Autonomous Drone Reforestation
- URL: http://arxiv.org/abs/2211.15414v1
- Date: Mon, 14 Nov 2022 13:25:22 GMT
- Title: Dynamic Collaborative Multi-Agent Reinforcement Learning Communication
for Autonomous Drone Reforestation
- Authors: Philipp Dominic Siedler
- Abstract summary: We approach autonomous drone-based reforestation with a collaborative multi-agent reinforcement learning (MARL) setup.
Agents can communicate as part of a dynamically changing network.
Results show how communication enables collaboration and increases collective performance, planting precision and the risk-taking propensity of individual agents.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We approach autonomous drone-based reforestation with a collaborative
multi-agent reinforcement learning (MARL) setup. Agents can communicate as part
of a dynamically changing network. We explore collaboration and communication
in the context of a high-impact problem. Forests are the main resource for
controlling rising CO2 levels. Unfortunately, the global forest volume is
decreasing at an unprecedented rate. Many areas are too large and too hard to
traverse to plant new trees. To efficiently cover as much area as possible, we propose a
Graph Neural Network (GNN) based communication mechanism that enables
collaboration. Agents can share location information on areas needing
reforestation, which increases viewed area and planted tree count. We compare
our proposed communication mechanism with a multi-agent baseline without the
ability to communicate. Results show how communication enables collaboration
and increases collective performance, planting precision and the risk-taking
propensity of individual agents.
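The abstract does not spell out the communication mechanism in code, but a minimal sketch of GNN-style message passing over a dynamically changing communication graph might look as follows. All names (build_comm_graph, the feature sizes, the comm_radius threshold) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (NumPy only): one round of GNN-style message passing over a
# dynamic communication graph built from agent positions. Sizes, names, and
# the distance threshold are assumptions for illustration.
import numpy as np

def build_comm_graph(positions: np.ndarray, comm_radius: float) -> np.ndarray:
    """Adjacency matrix: agents within comm_radius of each other are linked."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    adj = (dists < comm_radius) & ~np.eye(len(positions), dtype=bool)
    return adj.astype(float)

def message_passing(features: np.ndarray, adj: np.ndarray,
                    w_self: np.ndarray, w_neigh: np.ndarray) -> np.ndarray:
    """One GNN layer: mean-aggregate neighbour features, then update."""
    deg = adj.sum(axis=1, keepdims=True)
    agg = adj @ features / np.maximum(deg, 1.0)   # mean over neighbours
    return np.tanh(features @ w_self + agg @ w_neigh)

rng = np.random.default_rng(0)
n_agents, feat_dim = 4, 8
positions = rng.uniform(0, 100, size=(n_agents, 2))  # drone xy positions
features = rng.normal(size=(n_agents, feat_dim))     # encoded local observations
adj = build_comm_graph(positions, comm_radius=40.0)  # graph changes as drones move
updated = message_passing(features, adj,
                          rng.normal(size=(feat_dim, feat_dim)) * 0.1,
                          rng.normal(size=(feat_dim, feat_dim)) * 0.1)
print(updated.shape)  # (4, 8): each agent's features now mix in neighbour info
```

Rebuilding the graph every step is what makes the network "dynamically changing": which agents exchange messages depends on their current positions.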
Related papers
- Networked Agents in the Dark: Team Value Learning under Partial Observability [3.8779763612314633]
We propose a novel cooperative multi-agent reinforcement learning (MARL) approach for networked agents.
In contrast to previous methods that rely on complete state information or joint observations, our agents must learn how to reach shared objectives under partial observability.
During training, they collect individual rewards and approximate a team value function through local communication, resulting in cooperative behavior.
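As a hedged illustration of the idea (not the paper's update rule), agents that each see only a local reward can agree on a team-level estimate by repeatedly averaging with their graph neighbours; Metropolis consensus weights are one standard choice.

```python
# Hedged sketch: consensus averaging over a communication graph. Each agent
# starts from its individual return estimate and converges to the team
# average using Metropolis weights. Illustrative only.
import numpy as np

adj = np.array([[0, 1, 0, 0],   # 4 agents on a line; links = who can talk
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
local_returns = np.array([1.0, 0.2, 0.7, 0.1])  # individual reward estimates

deg = adj.sum(axis=1)
W = np.zeros_like(adj)
for i in range(4):
    for j in range(4):
        if adj[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))  # Metropolis weights
    W[i, i] = 1.0 - W[i].sum()   # doubly stochastic => converges to the mean

estimates = local_returns.copy()
for _ in range(200):
    estimates = W @ estimates    # each agent mixes in neighbours' estimates

print(estimates)                 # all entries approach ~0.5
print(local_returns.mean())      # the team average the consensus reaches
```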
arXiv Detail & Related papers (2025-01-15T13:01:32Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by neural scaling laws, this study investigates whether a similar principle applies as the number of agents in multi-agent collaboration increases.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
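One way a dynamic collaboration graph could be refreshed is sketched below: agents link up when their current task representations are similar. This is purely illustrative; DeLAMA's actual graph-learning rule is more involved, and update_collab_graph and the threshold are assumptions.

```python
# Hedged sketch: refresh a collaboration graph by linking agents whose task
# embeddings have high cosine similarity. Re-run each round as tasks drift.
import numpy as np

def update_collab_graph(task_embeddings: np.ndarray, threshold: float) -> np.ndarray:
    """Connect agents whose task embeddings have cosine similarity > threshold."""
    norms = np.linalg.norm(task_embeddings, axis=1, keepdims=True)
    unit = task_embeddings / norms
    sim = unit @ unit.T
    graph = (sim > threshold) & ~np.eye(len(sim), dtype=bool)
    return graph.astype(float)

rng = np.random.default_rng(1)
emb = rng.normal(size=(5, 16))        # per-agent task representations
print(update_collab_graph(emb, 0.2))  # adjacency for this learning round
```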
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
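A hedged sketch of such a cognitive-inspired modular loop follows: perception, memory, communication, and planning stages wrapped around an LLM call. The LLM is stubbed with a toy function, and the module names and prompts are assumptions rather than CoELA's actual interfaces.

```python
# Hedged sketch of a modular LLM-driven agent loop. ModularAgent, fake_llm,
# and all prompts are illustrative assumptions, not CoELA's implementation.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModularAgent:
    llm: Callable[[str], str]                  # any text-in/text-out model
    memory: List[str] = field(default_factory=list)

    def perceive(self, raw_obs: str) -> str:
        """Perception module: turn raw observations into a text state."""
        self.memory.append(raw_obs)
        return raw_obs

    def communicate(self, state: str) -> str:
        """Communication module: draft a (costly) message to teammates."""
        return self.llm(f"State: {state}. Draft a short message to teammates.")

    def plan(self, state: str, inbox: List[str]) -> str:
        """Planning module: pick the next high-level action from context."""
        context = " | ".join(self.memory[-3:] + inbox)
        return self.llm(f"Context: {context}. Choose the next action.")

def fake_llm(prompt: str) -> str:          # stand-in for a real LLM backend
    return "explore_room" if "action" in prompt else "found target in kitchen"

agent = ModularAgent(llm=fake_llm)
state = agent.perceive("I see a kitchen with two containers.")
msg = agent.communicate(state)
action = agent.plan(state, inbox=[msg])
print(msg, "->", action)
```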
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Learning to Communicate and Collaborate in a Competitive Multi-Agent Setup to Clean the Ocean from Macroplastics [0.0]
We propose a Graph Neural Network (GNN) based communication mechanism that increases the agents' observation space.
While the goal of the agent collective is to clean up as much as possible, agents are rewarded for the individual amount of macroplastics collected.
We compare our proposed communication mechanism with a multi-agent baseline without the ability to communicate.
arXiv Detail & Related papers (2023-04-12T14:02:42Z)
- Collaborative Auto-Curricula Multi-Agent Reinforcement Learning with Graph Neural Network Communication Layer for Open-ended Wildfire-Management Resource Distribution [0.0]
We build on a recently proposed Multi-Agent Reinforcement Learning (MARL) mechanism with a Graph Neural Network (GNN) communication layer.
We conduct our study in the context of resource distribution for wildfire management.
Our multi-agent communication proposal outperforms a Greedy Heuristic Baseline and a Single-Agent (SA) setup.
arXiv Detail & Related papers (2022-04-24T20:13:30Z)
- PooL: Pheromone-inspired Communication Framework for Large Scale Multi-Agent Reinforcement Learning [0.0]
PooL is an indirect communication framework applied to large-scale multi-agent reinforcement learning.
PooL uses the release and utilization mechanism of pheromones to control large-scale agent coordination.
PooL can capture effective information and achieve higher rewards than other state-of-the-art methods with lower communication costs.
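Pheromone-style indirect communication can be illustrated with a shared scalar field: agents deposit signal, the field evaporates over time, and others read only the local concentration. The sketch below is illustrative; PheromoneField and its parameter values are assumptions, and PooL's actual release/utilization rules differ in detail.

```python
# Hedged sketch of pheromone-based indirect communication on a grid.
import numpy as np

class PheromoneField:
    def __init__(self, size: int, evaporation: float = 0.95):
        self.grid = np.zeros((size, size))
        self.evaporation = evaporation

    def deposit(self, x: int, y: int, amount: float = 1.0) -> None:
        self.grid[x, y] += amount            # release pheromone at a cell

    def step(self) -> None:
        self.grid *= self.evaporation        # signal decays each timestep

    def sense(self, x: int, y: int) -> float:
        """Utilization: read the concentration in a 3x3 neighbourhood."""
        x0, x1 = max(x - 1, 0), min(x + 2, self.grid.shape[0])
        y0, y1 = max(y - 1, 0), min(y + 2, self.grid.shape[1])
        return float(self.grid[x0:x1, y0:y1].sum())

field = PheromoneField(size=10)
field.deposit(4, 4)          # agent A marks a visited area
field.step()                 # time passes, trace weakens
print(field.sense(5, 5))     # agent B picks up the nearby trace
```

Because agents never exchange messages directly, communication cost stays flat as the population grows, which is the appeal of this scheme at large scale.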
arXiv Detail & Related papers (2022-02-20T03:09:53Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
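The core idea of locality-based decomposition can be sketched as a team value that is a sum of per-agent utilities, each trained from that agent's local reward rather than one global signal. The tabular toy below is illustrative only; LOMAQ's partition-based formulation is more general, and all names and sizes here are assumptions.

```python
# Hedged sketch: decomposed team value Q_tot = sum_i Q_i, with each Q_i
# updated by a TD step on that agent's *local* reward.
import numpy as np

n_agents, n_local_states, n_actions = 3, 5, 2
q_local = [np.zeros((n_local_states, n_actions)) for _ in range(n_agents)]

def q_total(states, actions):
    """Decomposed team value: sum of per-agent local utilities."""
    return sum(q_local[i][states[i], actions[i]] for i in range(n_agents))

def td_update(i, s, a, local_reward, s_next, gamma=0.9, lr=0.1):
    """Each agent's utility learns from its own local reward signal."""
    target = local_reward + gamma * q_local[i][s_next].max()
    q_local[i][s, a] += lr * (target - q_local[i][s, a])

rng = np.random.default_rng(2)
for _ in range(100):
    states = rng.integers(0, n_local_states, size=n_agents)
    actions = rng.integers(0, n_actions, size=n_agents)
    for i in range(n_agents):
        td_update(i, states[i], actions[i],
                  local_reward=float(states[i] == 0),  # toy local reward
                  s_next=rng.integers(0, n_local_states))
print(q_total([0, 1, 2], [0, 0, 1]))
```

Because each utility depends only on a local neighbourhood, the learning problem no longer blows up exponentially with the number of agents.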
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
- HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via Learned Messaging [14.960795846548029]
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation learning abilities of deep neural networks.
This paper considers the case where there is a single, powerful central agent that can observe the entire observation space, and multiple low-powered local agents that receive only local observations and cannot communicate with each other.
The central agent learns what message to send to each local agent based on the global observations, determining what additional information an individual agent should receive so that it can make a better decision.
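The information flow in this setup can be sketched as follows: a central map from the global observation to small per-agent message vectors, and a shared local policy that acts on local observation plus received message. Linear maps stand in for the learned networks; all names and sizes are illustrative assumptions, not HAMMER's architecture.

```python
# Hedged sketch of central-to-local learned messaging: the central agent
# compresses the global view into one small message per local agent; local
# agents act on (local obs + message) with no peer-to-peer communication.
import numpy as np

rng = np.random.default_rng(3)
n_agents, global_dim, local_dim, msg_dim, n_actions = 3, 12, 4, 2, 5

w_msg = rng.normal(size=(global_dim, n_agents * msg_dim)) * 0.1  # central "policy"
w_act = rng.normal(size=(local_dim + msg_dim, n_actions)) * 0.1  # shared local policy

global_obs = rng.normal(size=global_dim)
local_obs = rng.normal(size=(n_agents, local_dim))

# Central agent: one learned message per local agent from the global view.
messages = (global_obs @ w_msg).reshape(n_agents, msg_dim)

# Local agents: act on their own observation plus the received message.
for i in range(n_agents):
    logits = np.concatenate([local_obs[i], messages[i]]) @ w_act
    print(f"agent {i} action:", int(np.argmax(logits)))
```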
arXiv Detail & Related papers (2021-01-18T19:00:12Z)
- A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stocks, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)