Multiagent Rollout and Policy Iteration for POMDP with Application to
Multi-Robot Repair Problems
- URL: http://arxiv.org/abs/2011.04222v1
- Date: Mon, 9 Nov 2020 06:51:50 GMT
- Title: Multiagent Rollout and Policy Iteration for POMDP with Application to
Multi-Robot Repair Problems
- Authors: Sushmita Bhattacharya, Siva Kailas, Sahil Badyal, Stephanie Gil,
Dimitri Bertsekas
- Abstract summary: We consider infinite horizon discounted dynamic programming problems with finite state and control spaces, partial state observations, and a multiagent structure.
Our methods specifically address the computational challenges of partially observable multiagent problems.
- Score: 1.6939372704265414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider infinite horizon discounted dynamic programming
problems with finite state and control spaces, partial state observations, and
a multiagent structure. We discuss and compare algorithms that simultaneously
or sequentially optimize the agents' controls by using multistep lookahead,
truncated rollout with a known base policy, and a terminal cost function
approximation. Our methods specifically address the computational challenges of
partially observable multiagent problems. In particular: 1) We consider rollout
algorithms that dramatically reduce required computation while preserving the
key cost improvement property of the standard rollout method. The per-step
computational requirements for our methods are on the order of $O(Cm)$ as
compared with $O(C^m)$ for standard rollout, where $C$ is the maximum
cardinality of the constraint set for the control component of each agent, and
$m$ is the number of agents. 2) We show that our methods can be applied to
challenging problems with a graph structure, including a class of robot repair
problems whereby multiple robots collaboratively inspect and repair a system
under partial information. 3) We provide a simulation study that compares our
methods with existing methods, and demonstrate that our methods can handle
larger and more complex partially observable multiagent problems (state space
size $10^{37}$ and control space size $10^{7}$, respectively). Finally, we
incorporate our multiagent rollout algorithms as building blocks in an
approximate policy iteration scheme, where successive rollout policies are
approximated by using neural network classifiers. While this scheme requires a
strictly off-line implementation, it works well in our computational
experiments and produces additional significant performance improvement over
the single online rollout iteration method.
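The complexity claim in point 1) refers to the number of rollout Q-factor evaluations needed to select the joint control at each stage. Below is a minimal sketch of the two selection rules; the interface (`controls`, `q_value`, `base_policy`) is hypothetical and only stands in for the truncated simulation with terminal cost approximation described in the abstract (in the POMDP setting, `state` would be the belief state).
```python
from itertools import product


def standard_rollout_step(state, controls, q_value):
    """Standard rollout: evaluate every joint control -- O(C^m) Q-factor calls."""
    return min(product(*controls), key=lambda u: q_value(state, u))


def agent_by_agent_rollout_step(state, controls, q_value, base_policy):
    """Multiagent rollout: optimize one agent at a time -- O(C*m) Q-factor calls.

    Earlier agents' choices are kept fixed, and the base policy supplies the
    control components of the agents that have not been optimized yet.
    """
    u = list(base_policy(state))          # start from the base policy's joint control
    for i, U_i in enumerate(controls):    # agents are processed in a fixed order
        u[i] = min(U_i, key=lambda c: q_value(state, tuple(u[:i]) + (c,) + tuple(u[i + 1:])))
    return tuple(u)
```
With $C$ admissible controls per agent and $m$ agents, the first rule evaluates $C^m$ joint controls, while the second evaluates at most $C$ controls per agent, i.e., $Cm$ in total; as stated above, the sequential scheme nonetheless preserves the cost improvement property of standard rollout.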
Related papers
- Scalable Mechanism Design for Multi-Agent Path Finding [87.40027406028425]
Multi-Agent Path Finding (MAPF) involves determining paths for multiple agents to travel simultaneously and collision-free through a shared area toward given goal locations.
Finding an optimal solution is often computationally infeasible, making the use of approximate, suboptimal algorithms essential.
We introduce the problem of scalable mechanism design for MAPF and propose three strategyproof mechanisms, two of which even use approximate MAPF algorithms.
arXiv Detail & Related papers (2024-01-30T14:26:04Z)
- Cooperative Thresholded Lasso for Sparse Linear Bandit [6.52540785559241]
We present a novel approach to address the multi-agent sparse contextual linear bandit problem.
It is the first algorithm that tackles row-wise distributed data in sparse linear bandits.
It is widely applicable to high-dimensional multi-agent problems where efficient feature extraction is critical for minimizing regret.
arXiv Detail & Related papers (2023-05-30T16:05:44Z)
- Efficient Policy Iteration for Robust Markov Decision Processes via Regularization [49.05403412954533]
Robust Markov decision processes (MDPs) provide a framework to model decision problems where the system dynamics are changing or only partially known.
Recent work established the equivalence between $s$-rectangular $L_p$ robust MDPs and regularized MDPs, and derived a regularized policy iteration scheme that enjoys the same level of efficiency as standard MDPs.
In this work, we focus on the policy improvement step and derive concrete forms for the greedy policy and the optimal robust Bellman operators.
arXiv Detail & Related papers (2022-05-28T04:05:20Z)
- An Experimental Design Perspective on Model-Based Reinforcement Learning [73.37942845983417]
In practical applications of RL, it is expensive to observe state transitions from the environment.
We propose an acquisition function that quantifies how much information a state-action pair would provide about the optimal solution to a Markov decision process.
arXiv Detail & Related papers (2021-12-09T23:13:57Z)
- Implicitly Regularized RL with Implicit Q-Values [42.87920755961722]
The $Q$-function is a central quantity in many Reinforcement Learning (RL) algorithms, in which agents act by following a (soft)-greedy policy with respect to the $Q$-function.
We propose to parametrize the $Q$-function implicitly, as the sum of a log-policy and a value function.
We derive a practical off-policy deep RL algorithm, suitable for large action spaces, and that enforces the softmax relation between the policy and the $Q$-value.
arXiv Detail & Related papers (2021-08-16T12:20:47Z)
- Online Sub-Sampling for Reinforcement Learning with General Function Approximation [111.01990889581243]
In this paper, we establish an efficient online sub-sampling framework that measures the information gain of data points collected by an RL algorithm.
For a value-based method with a complexity-bounded function class, we show that the policy only needs to be updated $\propto \operatorname{polylog}(K)$ times.
In contrast to existing approaches that update the policy at least $\Omega(K)$ times, our approach drastically reduces the number of optimization calls in solving for a policy.
arXiv Detail & Related papers (2021-06-14T07:36:25Z)
- Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z)
- Queueing Network Controls via Deep Reinforcement Learning [0.0]
We develop a Proximal Policy Optimization (PPO) algorithm for queueing networks.
The algorithm consistently generates control policies that outperform the state of the art in the literature.
A key to the success of our PPO algorithm is the use of three variance reduction techniques in estimating the relative value function.
arXiv Detail & Related papers (2020-07-31T01:02:57Z)
- FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
- Constrained Multiagent Rollout and Multidimensional Assignment with the Auction Algorithm [0.0]
We consider an extension of the rollout algorithm that applies to constrained deterministic dynamic programming.
We show that if the base heuristic produces a feasible solution, the rollout algorithm has a cost improvement property.
We show that the cost improvement property is maintained with an alternative implementation that has greatly reduced computational requirements.
arXiv Detail & Related papers (2020-02-18T07:09:06Z)
- Reinforcement Learning for POMDP: Partitioned Rollout and Policy Iteration with Application to Autonomous Sequential Repair Problems [2.6389022766562236]
We consider infinite horizon discounted dynamic programming problems with finite state and control spaces, and partial state observations.
We discuss an algorithm that uses multistep lookahead, truncated rollout with a known base policy, and a terminal cost function approximation.
arXiv Detail & Related papers (2020-02-11T02:38:38Z)
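The last entry above is the single-agent precursor of the present paper; both rely on truncated rollout with a known base policy and a terminal cost function approximation to estimate Q-factors. The following is a minimal sketch of such an estimate, assuming a hypothetical simulator interface (`simulate_step`, `base_policy`, and `terminal_cost` are illustrative names, not from either paper); in the POMDP setting, `state` would again be a belief state.
```python
def truncated_rollout_q(state, control, simulate_step, base_policy,
                        terminal_cost, horizon, gamma, num_samples=10):
    """Monte-Carlo estimate of a truncated rollout Q-factor: apply `control`,
    follow the base policy for `horizon` further steps, then approximate the
    tail of the infinite-horizon discounted cost with `terminal_cost`."""
    total = 0.0
    for _ in range(num_samples):
        cost, s = simulate_step(state, control)          # first stage under the chosen control
        discount = gamma
        for _ in range(horizon):                         # truncated rollout with the base policy
            stage_cost, s = simulate_step(s, base_policy(s))
            cost += discount * stage_cost
            discount *= gamma
        cost += discount * terminal_cost(s)              # terminal cost function approximation
        total += cost
    return total / num_samples
```
Truncating after `horizon` steps and adding the terminal cost approximation bounds the cost of each Q-factor evaluation.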