Simplified Continuous High Dimensional Belief Space Planning with
Adaptive Probabilistic Belief-dependent Constraints
- URL: http://arxiv.org/abs/2302.06697v1
- Date: Mon, 13 Feb 2023 21:22:47 GMT
- Title: Simplified Continuous High Dimensional Belief Space Planning with
Adaptive Probabilistic Belief-dependent Constraints
- Authors: Andrey Zhitnikov, Vadim Indelman
- Abstract summary: Online decision making under uncertainty in partially observable domains, also known as Belief Space Planning, is a fundamental problem.
We present a technique to adaptively accept or discard a candidate action sequence with respect to a probabilistic belief-dependent constraint.
We apply our method to active SLAM, a highly challenging problem of high dimensional Belief Space Planning.
- Score: 9.061408029414453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online decision making under uncertainty in partially observable domains,
also known as Belief Space Planning, is a fundamental problem in robotics and
Artificial Intelligence. Due to an abundance of plausible future unravelings,
calculating an optimal course of action inflicts an enormous computational
burden on the agent. Moreover, in many scenarios, e.g., information gathering,
it is required to introduce a belief-dependent constraint. Prompted by this
demand, in this paper, we consider a recently introduced probabilistic
belief-dependent constrained POMDP. We present a technique to adaptively accept
or discard a candidate action sequence with respect to a probabilistic
belief-dependent constraint, before expanding a complete set of future
observation samples and without any loss in accuracy. Moreover, using our
proposed framework, we contribute an adaptive method to find a maximal feasible
return (e.g., information gain) in terms of Value at Risk for the candidate
action sequence with substantial acceleration. On top of that, we introduce an
adaptive simplification technique for a probabilistically constrained setting.
Such an approach provably returns an identical-quality solution while
dramatically accelerating online decision making. Our universal framework
applies to any belief-dependent constrained continuous POMDP with parametric
beliefs, as well as nonparametric beliefs represented by particles. In the
context of an information-theoretic constraint, our presented framework
stochastically quantifies whether a cumulative information gain along the planning
horizon is sufficiently significant (e.g., for information gathering, active
SLAM). We apply our method to active SLAM, a highly challenging problem of high
dimensional Belief Space Planning. Extensive realistic simulations corroborate
the superiority of our proposed ideas.
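The probabilistic belief-dependent constraint described above can be illustrated with a small sketch: given Monte Carlo samples of the cumulative information gain for a candidate action sequence, an empirical Value at Risk decides acceptance. This is only a hedged illustration of the general idea, not the paper's adaptive algorithm; the function names, sample values, and the threshold/confidence parameters are assumptions.

```python
def value_at_risk(samples, alpha):
    """Empirical Value at Risk at level alpha: the alpha-quantile of the
    samples, so roughly a fraction (1 - alpha) of them lies at or above it."""
    ordered = sorted(samples)
    idx = int(alpha * (len(ordered) - 1))
    return ordered[idx]

def accept_action_sequence(gain_samples, threshold, alpha=0.05):
    """Accept a candidate action sequence only if, with confidence about
    (1 - alpha), its cumulative information gain meets `threshold`."""
    return value_at_risk(gain_samples, alpha) >= threshold

# Hypothetical sampled cumulative information gains for one candidate:
gains = [1.8, 2.1, 0.9, 2.4, 1.6, 2.0, 1.2, 2.2, 1.9, 2.3]
print(accept_action_sequence(gains, threshold=0.8))  # True: even the worst sample clears 0.8
print(accept_action_sequence(gains, threshold=1.5))  # False: the low-gain tail falls short
```

In the paper's setting the acceptance test is applied adaptively, before the full observation set is expanded; the sketch above only shows the final check on a complete sample set.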
Related papers
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Online Constraint Tightening in Stochastic Model Predictive Control: A
Regression Approach [49.056933332667114]
No analytical solutions exist for chance-constrained optimal control problems.
We propose a data-driven approach for learning the constraint-tightening parameters online during control.
Our approach yields constraint-tightening parameters that tightly satisfy the chance constraints.
arXiv Detail & Related papers (2023-10-04T16:22:02Z)
- Online POMDP Planning with Anytime Deterministic Guarantees [11.157761902108692]
Planning under uncertainty can be mathematically formalized using partially observable Markov decision processes (POMDPs)
Finding an optimal plan for POMDPs can be computationally expensive and is feasible only for small tasks.
We derive a deterministic relationship between a simplified solution that is easier to obtain and the theoretically optimal one.
arXiv Detail & Related papers (2023-10-03T04:40:38Z)
- Risk Aware Belief-dependent Constrained POMDP Planning [9.061408029414453]
Risk awareness is fundamental to an online operating agent.
Existing constrained POMDP algorithms are typically designed for discrete state and observation spaces.
This paper presents a novel formulation for risk-averse belief-dependent constrained POMDP.
arXiv Detail & Related papers (2022-09-06T17:48:13Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch descent gradient.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression yields an approximate rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Robustness Guarantees for Credal Bayesian Networks via Constraint
Relaxation over Probabilistic Circuits [16.997060715857987]
We develop a method to quantify the robustness of decision functions with respect to credal Bayesian networks.
We show how to obtain a guaranteed upper bound on MARmax in linear time in the size of the circuit.
arXiv Detail & Related papers (2022-05-11T22:37:07Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Online POMDP Planning via Simplification [10.508187462682306]
We develop a novel approach to POMDP planning considering belief-dependent rewards.
Our approach is guaranteed to find the optimal solution of the original problem but with substantial speedup.
We validate our approach in simulation using these bounds and where simplification corresponds to reducing the number of samples, exhibiting a significant computational speedup.
arXiv Detail & Related papers (2021-05-11T18:46:08Z)
- Adaptive Belief Discretization for POMDP Planning [7.508023795800546]
Many POMDP solvers uniformly discretize the belief space and give the planning error in terms of the (typically unknown) covering number.
We propose an adaptive belief discretization scheme, and give its associated planning error.
We demonstrate that our algorithm is highly competitive with the state of the art in a variety of scenarios.
arXiv Detail & Related papers (2021-04-15T07:04:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.