Exploiting Submodular Value Functions For Scaling Up Active Perception
- URL: http://arxiv.org/abs/2009.09696v1
- Date: Mon, 21 Sep 2020 09:11:36 GMT
- Title: Exploiting Submodular Value Functions For Scaling Up Active Perception
- Authors: Yash Satsangi, Shimon Whiteson, Frans A. Oliehoek, Matthijs T. J. Spaan
- Abstract summary: In active perception tasks, an agent aims to select sensory actions that reduce uncertainty about one or more hidden variables.
Partially observable Markov decision processes (POMDPs) provide a natural model for such problems.
As the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially.
- Score: 60.81276437097671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In active perception tasks, an agent aims to select sensory actions that
reduce its uncertainty about one or more hidden variables. While partially
observable Markov decision processes (POMDPs) provide a natural model for such
problems, reward functions that directly penalize uncertainty in the agent's
belief can remove the piecewise-linear and convex (PWLC) property of the value
function required by most POMDP planners. Furthermore, as the number of sensors
available to the agent grows, the computational cost of POMDP planning grows
exponentially with it, making POMDP planning infeasible with traditional
methods. In this article, we address a twofold challenge of modeling and
planning for active perception tasks. We show the mathematical equivalence of
$\rho$POMDP and POMDP-IR, two frameworks for modeling active perception tasks,
that restore the PWLC property of the value function. To efficiently plan for
active perception tasks, we identify and exploit the independence properties of
POMDP-IR to reduce the computational cost of solving POMDP-IR (and
$\rho$POMDP). We propose greedy point-based value iteration (PBVI), a new POMDP
planning method that uses greedy maximization to greatly improve scalability in
the action space of an active perception POMDP. Furthermore, we show that,
under certain conditions, including submodularity, the value function computed
using greedy PBVI is guaranteed to have bounded error with respect to the
optimal value function. We establish the conditions under which the value
function of an active perception POMDP is guaranteed to be submodular. Finally,
we present a detailed empirical analysis on a dataset collected from a
multi-camera tracking system employed in a shopping mall. Our method achieves
similar performance to existing methods but at a fraction of the computational
cost, leading to better scalability for solving active perception tasks.
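
The scalability claim rests on a simple computational idea: when the belief-based objective (such as expected reduction in belief entropy) is monotone and submodular, the exponentially large space of sensor subsets can be searched greedily with a bounded loss. The Python sketch below illustrates that greedy step in isolation, under stated assumptions; the names (`entropy`, `information_gain`, `greedy_select`) and the `observation_model` interface are illustrative, not the authors' API, and greedy PBVI embeds this selection inside point-based value iteration backups rather than using it as a standalone routine.

```python
# Minimal sketch (not the paper's implementation) of the greedy maximization
# step that greedy PBVI relies on: pick sensors one at a time by marginal
# information gain. All names below are illustrative assumptions.
import math

def entropy(belief):
    """Shannon entropy of a belief, i.e. a probability vector over hidden states."""
    return -sum(p * math.log(p) for p in belief if p > 0.0)

def information_gain(belief, sensor_set, observation_model):
    """Expected reduction in belief entropy from reading the chosen sensors.

    `observation_model(belief, sensor_set)` is assumed to return a list of
    (probability, posterior_belief) pairs for the possible joint observations.
    """
    expected_posterior_entropy = sum(
        prob * entropy(posterior)
        for prob, posterior in observation_model(belief, sensor_set)
    )
    return entropy(belief) - expected_posterior_entropy

def greedy_select(belief, sensors, k, observation_model):
    """Greedily choose k sensors, each time adding the largest marginal gain.

    If the objective is monotone and submodular, the greedy set achieves at
    least a (1 - 1/e) fraction of the best achievable value for size k.
    """
    selected, remaining = [], list(sensors)
    for _ in range(k):
        best = max(remaining,
                   key=lambda s: information_gain(belief, selected + [s],
                                                  observation_model))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The practical point is that greedy selection evaluates on the order of k·n marginal gains instead of enumerating all n-choose-k sensor subsets; the paper's contribution is to integrate this step into point-based backups and to establish the conditions under which the submodularity-based error bound applies.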
Related papers
- Anytime Incremental $\rho$POMDP Planning in Continuous Spaces [5.767643556541711]
We present an anytime solver that dynamically refines belief representations, with formal guarantees of improvement over time.
We demonstrate its effectiveness for common entropy estimators, reducing computational cost by orders of magnitude.
Experimental results show that $\rho$POMCPOW outperforms state-of-the-art solvers in both efficiency and solution quality.
arXiv Detail & Related papers (2025-02-04T18:19:40Z)
- Automatic Double Reinforcement Learning in Semiparametric Markov Decision Processes with Applications to Long-Term Causal Inference [33.14076284663493]
We study efficient inference on linear functionals of the $Q$-function in time-invariant Markov Decision Processes (MDPs).
These restrictions can reduce the overlap requirement and lower the efficiency bound, yielding more precise estimates.
As a special case, we propose a novel adaptive debiased plug-in estimator that uses isotonic-adaptive fitted $Q$-iteration - a new calibration algorithm for MDPs.
arXiv Detail & Related papers (2025-01-12T20:35:28Z)
- Towards Cost Sensitive Decision Making [14.279123976398926]
In this work, we consider RL models that may actively acquire features from the environment to improve the decision quality and certainty.
We propose the Active-Acquisition POMDP and identify two types of acquisition process for different application domains.
In order to assist the agent in the actively-acquired partially-observed environment and alleviate the exploration-exploitation dilemma, we develop a model-based approach.
arXiv Detail & Related papers (2024-10-04T19:48:23Z)
- R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models [50.19174067263255]
We introduce prior preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous action, goal-based robotic control POMDP environments.
We show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate.
arXiv Detail & Related papers (2024-09-21T18:32:44Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Learning Logic Specifications for Policy Guidance in POMDPs: an Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z)
- PAC: Assisted Value Factorisation with Counterfactual Predictions in Multi-Agent Reinforcement Learning [43.862956745961654]
Multi-agent reinforcement learning (MARL) has witnessed significant progress with the development of value function factorization methods.
In this paper, we show that in partially observable MARL problems, an agent's ordering over its own actions could impose concurrent constraints.
We propose PAC, a new framework leveraging information generated from Counterfactual Predictions of optimal joint action selection.
arXiv Detail & Related papers (2022-06-22T23:34:30Z)
- Variance-Aware Off-Policy Evaluation with Linear Function Approximation [85.75516599931632]
We study the off-policy evaluation problem in reinforcement learning with linear function approximation.
We propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration.
arXiv Detail & Related papers (2021-06-22T17:58:46Z)
- Blending MPC & Value Function Approximation for Efficient Reinforcement Learning [42.429730406277315]
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems.
We present a framework for improving on MPC with model-free reinforcement learning (RL).
We show that our approach can obtain performance comparable with MPC with access to true dynamics.
arXiv Detail & Related papers (2020-12-10T11:32:01Z)
- Exploration-Exploitation in Constrained MDPs [79.23623305214275]
We investigate the exploration-exploitation dilemma in Constrained Markov Decision Processes (CMDPs).
While learning in an unknown CMDP, an agent should trade off exploration, to discover new information about the MDP, against exploitation of its current knowledge.
While the agent will eventually learn a good or optimal policy, we do not want the agent to violate the constraints too often during the learning process.
arXiv Detail & Related papers (2020-03-04T17:03:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.