Resource-Aware Distributed Submodular Maximization: A Paradigm for
Multi-Robot Decision-Making
- URL: http://arxiv.org/abs/2204.07520v1
- Date: Fri, 15 Apr 2022 15:47:05 GMT
- Title: Resource-Aware Distributed Submodular Maximization: A Paradigm for
Multi-Robot Decision-Making
- Authors: Zirui Xu, Vasileios Tzoumas
- Abstract summary: Resource-Aware distributed Greedy is the first algorithm to account for each robot's on-board resources independently.
RAG balances the trade-off of centralization, for global near-optimality, vs. decentralization, for near-minimal on-board computation, communication, and memory resources.
- Score: 3.5788754401889022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the first algorithm for distributed decision-making that
provably balances the trade-off of centralization, for global near-optimality,
vs. decentralization, for near-minimal on-board computation, communication, and
memory resources. We are motivated by the future of autonomy that involves
heterogeneous robots collaborating in complex tasks, such as image covering,
target tracking, and area monitoring. Current algorithms, such as consensus
algorithms, are insufficient to fulfill this future: they achieve distributed
communication only, at the expense of high communication, computation, and
memory overloads. A shift is needed to resource-aware algorithms that can
account for each robot's on-board resources independently. We provide the
first resource-aware algorithm, Resource-Aware distributed Greedy (RAG). We
focus on maximization problems involving monotone and "doubly" submodular
functions, which capture a diminishing-returns property. RAG has near-minimal on-board
resource requirements. Each agent can afford to run the algorithm by adjusting
the size of its neighborhood, even if that means selecting actions in complete
isolation. RAG has provable approximation performance, where each agent can
independently determine its contribution. All in all, RAG is the first
algorithm to quantify the trade-off of centralization, for global
near-optimality, vs. decentralization, for near-minimal on-board resource
requirements. To capture the trade-off, we introduce the notion of
Centralization Of Information among non-Neighbors (COIN). We validate RAG in
simulated scenarios of image covering with mobile robots.
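The monotone submodular setting that RAG targets can be illustrated with a minimal sequential-greedy sketch for an image-covering objective. This is a hypothetical illustration of the diminishing-returns structure only, not the RAG algorithm itself (RAG additionally limits each agent's communication to a resource-aware neighborhood); all names and data below are invented for the example.

```python
# Hypothetical sketch: sequential greedy selection for a monotone submodular
# coverage objective. NOT the paper's RAG algorithm; illustration only.

def coverage(selected_sets):
    """Submodular objective: number of elements covered by the chosen sets."""
    covered = set()
    for s in selected_sets:
        covered |= s
    return len(covered)

def greedy_select(agent_actions):
    """Each agent picks, in turn, the action with the largest marginal gain.

    agent_actions: one entry per agent, each a list of candidate actions;
    an action is the set of elements (e.g. image cells) it would cover.
    """
    chosen = []
    for actions in agent_actions:
        base = coverage(chosen)
        # marginal gain of each candidate given earlier agents' choices;
        # submodularity means these gains can only shrink as `chosen` grows
        best = max(actions, key=lambda a: coverage(chosen + [a]) - base)
        chosen.append(best)
    return chosen

# Example: three robots, each with two candidate coverage regions
agents = [
    [{1, 2, 3}, {3, 4}],
    [{3, 4, 5}, {1, 2}],
    [{5, 6}, {2, 3}],
]
picks = greedy_select(agents)
```

For monotone submodular objectives, this kind of sequential greedy choice is the classical route to constant-factor near-optimality; RAG's contribution is making such choices with near-minimal per-robot computation, communication, and memory.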
Related papers
- Distributed Online Rollout for Multivehicle Routing in Unmapped
Environments [0.8437187555622164]
We present a fully distributed, online, and scalable reinforcement learning algorithm for the well-known multivehicle routing problem.
Agents self-organize into local clusters and independently apply a multiagent rollout scheme locally to each cluster.
Our algorithm achieves approximately a factor of two cost improvement over the base policy for a range of radii bounded from below and above by two and three times the critical sensing radius, respectively.
arXiv Detail & Related papers (2023-05-24T22:06:44Z) - Centralizing State-Values in Dueling Networks for Multi-Robot
Reinforcement Learning Mapless Navigation [87.85646257351212]
We study the problem of multi-robot mapless navigation in the popular Centralized Training and Decentralized Execution (CTDE) paradigm.
This problem is challenging when each robot considers its path without explicitly sharing observations with other robots.
We propose a novel architecture for CTDE that uses a centralized state-value network to compute a joint state-value.
arXiv Detail & Related papers (2021-12-16T16:47:00Z) - DESTRESS: Computation-Optimal and Communication-Efficient Decentralized
Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimizations.
We develop the DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z) - Graph Neural Networks for Decentralized Multi-Robot Submodular Action
Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture towards submodular maximization at scale, with decentralized communications.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
arXiv Detail & Related papers (2021-05-18T15:32:07Z) - Multi-resource allocation for federated settings: A non-homogeneous
Markov chain model [2.552459629685159]
In a federated setting, agents coordinate with a central agent or a server to solve an optimization problem in which agents do not share their information with each other.
We describe how the basic additive-increase multiplicative-decrease (AIMD) algorithm can be modified in a straightforward manner to solve a class of optimization problems for federated settings for a single shared resource with no inter-agent communication.
We extend the single-resource algorithm to multiple heterogeneous shared resources that emerge in smart cities, sharing economy, and many other applications.
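As a rough illustration of the AIMD scheme described above, here is a hypothetical sketch for a single shared resource; parameter names and values are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the classic AIMD rule for one shared resource:
# each agent additively increases its demand until total capacity is
# exceeded, then multiplicatively backs off. Illustration only.

def aimd_step(demands, capacity, alpha=1.0, beta=0.5):
    """One synchronous AIMD update for all agents."""
    if sum(demands) > capacity:
        # congestion event: every agent cuts its demand by factor beta
        return [d * beta for d in demands]
    # otherwise every agent probes for more by adding alpha
    return [d + alpha for d in demands]

def run_aimd(demands, capacity, steps):
    for _ in range(steps):
        demands = aimd_step(demands, capacity)
    return demands

# Two agents starting from unequal demands converge toward a fair share
final = run_aimd([0.0, 4.0], capacity=10.0, steps=50)
```

Each congestion event halves the gap between agents' demands while additive increase preserves it, which is why AIMD drives unequal demands toward fairness without inter-agent communication.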
arXiv Detail & Related papers (2021-04-26T19:10:00Z) - Resource allocation in dynamic multiagent systems [0.0]
The MG-RAO algorithm is developed to solve resource allocation problems in multi-agent systems.
It shows a 23-28% improvement over fixed resource allocation in the simulated environments.
Results also show that, in a volatile system, configuring MG-RAO so that child agents model resource allocation for all agents as a whole achieves 46.5% of the performance obtained when it is set to model multiple groups of agents.
arXiv Detail & Related papers (2021-02-16T17:56:23Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Multi-Agent Deep Reinforcement Learning enabled Computation Resource
Allocation in a Vehicular Cloud Network [30.736512922808362]
We investigate the computational resource allocation problem in a distributed Ad-Hoc vehicular network with no centralized infrastructure support.
To overcome the dilemma of lacking a real central control unit in VCN, the allocation is completed on the vehicles in a distributed manner.
arXiv Detail & Related papers (2020-08-14T17:02:24Z) - Resource Allocation via Model-Free Deep Learning in Free Space Optical
Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent setting.
Our framework can achieve scalability and stability for large-scale environment and reduce information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.