Risk Conditioned Neural Motion Planning
- URL: http://arxiv.org/abs/2108.01851v1
- Date: Wed, 4 Aug 2021 05:33:52 GMT
- Title: Risk Conditioned Neural Motion Planning
- Authors: Xin Huang, Meng Feng, Ashkan Jasour, Guy Rosman, Brian Williams
- Abstract summary: Risk-bounded motion planning is an important yet difficult problem for safety-critical tasks.
We propose an extension of the soft actor-critic model that estimates the execution risk of a plan through a risk critic.
We show the advantage of our model in terms of both computational time and plan quality, compared to a state-of-the-art mathematical programming baseline.
- Score: 14.018786843419862
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Risk-bounded motion planning is an important yet difficult problem for
safety-critical tasks. While existing mathematical programming methods offer
theoretical guarantees in the context of constrained Markov decision processes,
they either lack scalability in solving larger problems or produce conservative
plans. Recent advances in deep reinforcement learning improve scalability by
learning policy networks as function approximators. In this paper, we propose
an extension of the soft actor-critic model that estimates the execution risk of a
plan through a risk critic and produces risk-bounded policies efficiently by
adding an extra risk term to the loss function of the policy network. We define
the execution risk in an accurate form, as opposed to approximating it through
a summation of immediate risks at each time step, which leads to conservative
plans. Our proposed model is conditioned on a continuous spectrum of risk
bounds, allowing the user to adjust the risk-averse level of the agent on the
fly. Through a set of experiments, we show the advantage of our model in terms
of both computational time and plan quality, compared to a state-of-the-art
mathematical programming baseline, and validate its performance in more
complicated scenarios, including nonlinear dynamics and larger state space.
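To make the approach concrete, below is a minimal sketch (not the authors' code) of how a risk critic and a user-specified risk bound could enter a soft actor-critic policy update. The network interfaces, the `risk_weight` coefficient, and the hinge penalty on bound violations are illustrative assumptions; the paper's exact loss and conditioning scheme may differ.

```python
import torch
import torch.nn.functional as F

def risk_conditioned_policy_loss(policy, q_critic, risk_critic, states, deltas,
                                 alpha=0.2, risk_weight=10.0):
    """SAC-style policy loss with an added execution-risk term (illustrative sketch).

    states: batch of states, shape (B, state_dim)
    deltas: user-specified risk upper bounds in [0, 1], shape (B, 1)
    `policy`, `q_critic`, and `risk_critic` are assumed user-defined networks,
    each conditioned on the risk bound `deltas` as an extra input.
    """
    # Sample reparameterized actions from the risk-conditioned policy.
    actions, log_probs = policy.sample(states, deltas)

    # Standard SAC term: maximize the soft Q-value, i.e. minimize (alpha*logpi - Q).
    q_values = q_critic(states, actions, deltas)
    sac_term = (alpha * log_probs - q_values).mean()

    # Extra risk term: penalize predicted execution risk that exceeds the bound.
    exec_risk = risk_critic(states, actions, deltas)
    risk_term = F.relu(exec_risk - deltas).mean()

    return sac_term + risk_weight * risk_term
```

Conditioning every network on the bound is what would let a single trained policy adjust its risk-averse level on the fly, as the abstract describes. The risk critic itself would be trained against a Bellman-style target reflecting the recursive execution risk (roughly ER_t = r_t + (1 - r_t) ER_{t+1}, with r_t the immediate risk at step t) rather than a plain sum of immediate risks; this matches the abstract's distinction, though the paper's exact target is not reproduced here.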
Related papers
- Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood
Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z) - RASR: Risk-Averse Soft-Robust MDPs with EVaR and Entropic Risk [28.811725782388688]
We propose and analyze a new framework to jointly model the risk associated with uncertainties in finite-horizon and discounted infinite-horizon MDPs.
We show that when the risk-aversion is defined using either EVaR or the entropic risk, the optimal policy in RASR can be computed efficiently using a new dynamic program formulation with a time-dependent risk level.
arXiv Detail & Related papers (2022-09-09T00:34:58Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Reinforcement Learning with Dynamic Convex Risk Measures [0.0]
We develop an approach for solving time-consistent risk-sensitive optimization problems using model-free reinforcement learning (RL).
We employ a time-consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules.
arXiv Detail & Related papers (2021-12-26T16:41:05Z) - Deep Reinforcement Learning for Equal Risk Pricing and Hedging under
Dynamic Expectile Risk Measures [1.2891210250935146]
We show that a new off-policy deterministic actor-critic deep reinforcement learning algorithm can identify high quality time consistent hedging policies for options.
Our numerical experiments, which involve both a simple vanilla option and a more exotic basket option, confirm that the new algorithm can produce, in simple environments, nearly optimal hedging policies and highly accurate prices simultaneously for a range of maturities.
Overall, it yields hedging strategies that outperform the strategies produced using static risk measures when the risk is evaluated at later points in time.
arXiv Detail & Related papers (2021-09-09T02:52:06Z) - Risk-Averse Stochastic Shortest Path Planning [25.987787625028204]
We show that optimal, stationary, Markovian policies exist and can be found via a special Bellman's equation.
A rover navigation MDP is used to illustrate the proposed methodology with conditional value-at-risk (CVaR) and entropic value-at-risk (EVaR) coherent risk measures (standard definitions of these measures are sketched after this list).
arXiv Detail & Related papers (2021-03-26T20:49:14Z) - Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds
Globally Optimal Policy [95.98698822755227]
We make the first attempt to study risk-sensitive deep reinforcement learning under the average reward setting with the variance risk criteria.
We propose an actor-critic algorithm that iteratively and efficiently updates the policy, the Lagrange multiplier, and the Fenchel dual variable.
arXiv Detail & Related papers (2020-12-28T05:02:26Z) - Entropic Risk Constrained Soft-Robust Policy Optimization [12.362670630646805]
It is important in high-stakes domains to quantify and manage risk induced by model uncertainties.
We propose entropic risk constrained policy gradient and actor-critic algorithms that are risk-averse to model uncertainty.
arXiv Detail & Related papers (2020-06-20T23:48:28Z)
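Several of the related papers above (e.g., the RASR, rover-navigation, and entropic-risk entries) build on coherent or convex risk measures. For reference, one common set of definitions for a loss random variable X at confidence level alpha is sketched below; sign and parameterization conventions differ across papers (loss vs. reward, confidence level vs. tail probability), so treat these as illustrative rather than the exact forms used in each paper.

```latex
% Common risk-measure definitions for a loss random variable X, level \alpha \in (0,1).
% Conventions vary across the papers listed above.
\begin{align*}
\mathrm{CVaR}_{\alpha}(X) &= \inf_{z \in \mathbb{R}} \Big\{ z + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - z)_{+}\big] \Big\} \\
\rho^{\mathrm{ent}}_{\beta}(X) &= \tfrac{1}{\beta} \log \mathbb{E}\big[e^{\beta X}\big], \qquad \beta > 0 \\
\mathrm{EVaR}_{\alpha}(X) &= \inf_{\beta > 0} \tfrac{1}{\beta} \log\!\Big( \tfrac{\mathbb{E}\big[e^{\beta X}\big]}{1-\alpha} \Big)
\end{align*}
```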