Adaptive Risk Tendency: Nano Drone Navigation in Cluttered Environments
with Distributional Reinforcement Learning
- URL: http://arxiv.org/abs/2203.14749v1
- Date: Mon, 28 Mar 2022 13:39:58 GMT
- Title: Adaptive Risk Tendency: Nano Drone Navigation in Cluttered Environments
with Distributional Reinforcement Learning
- Authors: Cheng Liu, Erik-Jan van Kampen, Guido C.H.E. de Croon
- Abstract summary: We present a distributional reinforcement learning framework to learn adaptive risk tendency policies.
We show our algorithm can adjust its risk-sensitivity on the fly both in simulation and real-world experiments.
- Score: 17.940958199767234
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Enabling robots with the capability of assessing risk and making risk-aware
decisions is widely considered a key step toward ensuring robustness for robots
operating under uncertainty. In this paper, we consider the specific case of a
nano drone robot learning to navigate an a priori unknown environment while
avoiding obstacles under partial observability. We present a distributional
reinforcement learning framework in order to learn adaptive risk tendency
policies. Specifically, we propose to use tail conditional variance of the
learnt action-value distribution as an uncertainty measurement, and use an
exponentially weighted average forecasting algorithm to automatically adapt the
risk-tendency at run-time based on the observed uncertainty in the environment.
We show our algorithm can adjust its risk-sensitivity on the fly in both
simulation and real-world experiments, achieving better performance than
risk-neutral and risk-averse policies. Code and a real-world experiment
video can be found in this repository:
\url{https://github.com/tudelft/risk-sensitive-rl.git}
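The two mechanisms the abstract names can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the names (`tail_conditional_variance`, `AdaptiveRiskTendency`, and all default parameters) are invented here, assuming a quantile-based approximation of the action-value distribution.

```python
# Hypothetical sketch: tail conditional variance of a quantile-approximated
# action-value distribution as an uncertainty signal, plus an exponentially
# weighted average (EWMA) that adapts the risk tendency at run-time.

def tail_conditional_variance(quantiles, tail_fraction=0.25):
    """Variance of the lower-tail quantiles of the return distribution."""
    q = sorted(quantiles)
    k = max(1, int(len(q) * tail_fraction))
    tail = q[:k]
    mean = sum(tail) / k
    return sum((x - mean) ** 2 for x in tail) / k

class AdaptiveRiskTendency:
    """Maps an EWMA of observed uncertainty to a risk tendency in (0, 1]."""
    def __init__(self, alpha=0.3, scale=1.0):
        self.alpha = alpha  # EWMA smoothing factor
        self.scale = scale  # how strongly uncertainty induces risk aversion
        self.ewma = 0.0

    def step(self, quantiles):
        u = tail_conditional_variance(quantiles)
        self.ewma = self.alpha * u + (1 - self.alpha) * self.ewma
        # 1.0 = risk-neutral; values near 0 = strongly risk-averse
        return 1.0 / (1.0 + self.scale * self.ewma)
```

A widely spread lower tail raises the EWMA and pushes the returned tendency below 1, i.e. toward risk aversion; a tight return distribution lets it relax back toward risk-neutral behavior.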
Related papers
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z) - Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning [12.156082576280955]
Deployment in hazardous environments requires robots to understand the risks associated with their actions and movements to prevent accidents.
We propose a risk sensitive locomotion training method employing distributional reinforcement learning to consider safety explicitly.
We show emergent risk sensitive locomotion behavior in simulation and on the quadrupedal robot ANYmal.
arXiv Detail & Related papers (2023-09-25T16:05:32Z) - One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based
Offline Reinforcement Learning [25.218430053391884]
We propose risk-sensitivity as a mechanism to jointly address both of these issues.
Risk-aversion to aleatoric uncertainty discourages actions that may result in poor outcomes due to environment stochasticity.
Our experiments show that our algorithm achieves competitive performance on deterministic benchmarks.
arXiv Detail & Related papers (2022-11-30T21:24:11Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Automatic Risk Adaptation in Distributional Reinforcement Learning [26.113528145137497]
The use of Reinforcement Learning (RL) agents in practical applications requires the consideration of suboptimal outcomes.
This is especially important in safety-critical environments, where errors can lead to high costs or damage.
We show reduced failure rates by up to a factor of 7 and improved generalization performance by up to 14% compared to both risk-aware and risk-agnostic agents.
arXiv Detail & Related papers (2021-06-11T11:31:04Z) - XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision
Trees [55.9643422180256]
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments.
Our approach uses a deep reinforcement learning-based expert policy that is trained using a sim2real paradigm.
We highlight the benefits of our algorithm in simulated environments and navigating a Clearpath Jackal robot among moving pedestrians.
arXiv Detail & Related papers (2021-04-22T01:33:10Z) - Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for
Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria.
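The execution step described above can be sketched with CVaR (conditional value-at-risk), one of the established risk criteria; this is an illustrative example assuming per-action quantile estimates, not that paper's actual implementation.

```python
import math

# Illustrative sketch: after training a distributional policy, select the
# risk-sensitive action by maximizing CVaR over per-action return quantiles.

def cvar(quantiles, alpha=0.25):
    """Mean of the worst alpha-fraction of return quantiles."""
    q = sorted(quantiles)
    k = max(1, math.ceil(alpha * len(q)))
    return sum(q[:k]) / k

def select_action(quantiles_per_action, alpha=0.25):
    """Pick the action whose return distribution has the highest CVaR."""
    scores = [cvar(q, alpha) for q in quantiles_per_action]
    return scores.index(max(scores))
```

Under this criterion a high-mean but high-variance action can lose to a safer one: `select_action([[0, 5, 10, 15], [4, 5, 6, 7]])` picks the second action even though the first has the higher mean.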
arXiv Detail & Related papers (2021-02-05T11:45:12Z) - Risk-Sensitive Sequential Action Control with Multi-Modal Human
Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
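For reference, the entropic risk measure named in this summary has the standard form, for a cost $X$ and risk-sensitivity parameter $\theta > 0$:

```latex
R_\theta(X) = \frac{1}{\theta} \log \mathbb{E}\left[ e^{\theta X} \right]
```

As $\theta \to 0$ this recovers the expected cost $\mathbb{E}[X]$, while for $\theta > 0$ a second-order expansion gives $R_\theta(X) \approx \mathbb{E}[X] + \tfrac{\theta}{2}\operatorname{Var}(X)$, i.e. the measure additionally penalizes variance.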
arXiv Detail & Related papers (2020-09-12T02:02:52Z) - Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
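The optimized certainty equivalent (OCE) mentioned here is a standard family of risk measures: for a utility-like function $\phi$ and loss $X$,

```latex
S_\phi(X) = \inf_{\lambda \in \mathbb{R}} \left\{ \lambda + \mathbb{E}\left[ \phi(X - \lambda) \right] \right\}
```

Particular choices of $\phi$ recover familiar measures, e.g. a piecewise-linear $\phi$ yields CVaR and an exponential $\phi$ yields the entropic risk measure.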
arXiv Detail & Related papers (2020-06-15T05:25:02Z) - Guided Uncertainty-Aware Policy Optimization: Combining Learning and
Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z) - Improving Robustness via Risk Averse Distributional Reinforcement
Learning [13.467017642143581]
Robustness is critical when policies are trained in simulation instead of the real-world environment.
We propose a risk-aware algorithm to learn robust policies in order to bridge the gap between simulation training and real-world implementation.
arXiv Detail & Related papers (2020-05-01T20:03:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.