VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline
Reinforcement Learning
- URL: http://arxiv.org/abs/2309.07832v2
- Date: Tue, 19 Sep 2023 21:22:19 GMT
- Authors: Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Mohamed Elnoor, Dinesh
Manocha
- Score: 53.13393315664145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present VAPOR, a novel method for autonomous legged robot navigation in
unstructured, densely vegetated outdoor environments using offline
Reinforcement Learning (RL). Our method trains a novel RL policy using an
actor-critic network and arbitrary data collected in real outdoor vegetation.
Our policy uses height and intensity-based cost maps derived from 3D LiDAR
point clouds, a goal cost map, and processed proprioception data as state
inputs, and learns the physical and geometric properties of the surrounding
obstacles such as height, density, and solidity/stiffness. The fully-trained
policy's critic network is then used to evaluate the quality of dynamically
feasible velocities generated from a novel context-aware planner. Our planner
adapts the robot's velocity space based on the presence of entrapment-inducing
vegetation and narrow passages in dense environments. We demonstrate our
method's capabilities on a Spot robot in complex real-world outdoor scenes,
including dense vegetation. We observe that VAPOR's actions improve success
rates by up to 40%, decrease the average current consumption by up to 2.9%, and
decrease the normalized trajectory length by up to 11.2% compared to existing
end-to-end offline RL and other outdoor navigation methods.
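The abstract describes using the fully-trained policy's critic to score dynamically feasible velocities proposed by a context-aware planner. A minimal sketch of that selection loop follows; the toy critic, velocity limits, and state contents are illustrative assumptions, not the paper's actual network or planner:

```python
import numpy as np

# Hypothetical stand-in for a trained critic Q(s, a). VAPOR trains an
# actor-critic network offline; this toy quadratic merely makes the
# sketch runnable (it prefers forward motion with little turning).
def critic_q(state, action):
    v, omega = action
    return v - 0.5 * abs(omega) - 0.1 * state["clutter"] * v

def feasible_velocities(v_max, omega_max, n=5):
    """Enumerate a grid of dynamically feasible (v, omega) commands.
    A context-aware planner would shrink v_max/omega_max near
    entrapment-inducing vegetation or narrow passages."""
    vs = np.linspace(0.0, v_max, n)
    omegas = np.linspace(-omega_max, omega_max, n)
    return [(v, w) for v in vs for w in omegas]

def select_action(state, v_max=1.0, omega_max=1.5):
    """Score each candidate with the critic and return the best one."""
    candidates = feasible_velocities(v_max, omega_max)
    scores = [critic_q(state, a) for a in candidates]
    return candidates[int(np.argmax(scores))]

# Pick the best (v, omega) command for the current state.
best = select_action({"clutter": 0.2})
```

The key design choice mirrored here is that the critic is used only to *rank* externally generated, feasibility-checked commands, rather than executing the actor's raw output directly.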
Related papers
- EnCoMP: Enhanced Covert Maneuver Planning with Adaptive Threat-Aware Visibility Estimation using Offline Reinforcement Learning [0.6597195879147555]
We propose EnCoMP, an enhanced navigation framework to enable robots to navigate covertly in diverse outdoor settings.
We generate high-fidelity multi-map representations, including cover maps, potential threat maps, height maps, and goal maps from LiDAR point clouds.
We demonstrate our method's capabilities on a physical Jackal robot, showing extensive experiments across diverse terrains.
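EnCoMP's multi-map inputs (cover maps, threat maps, height maps, goal maps) are rasterized from LiDAR point clouds. A minimal sketch of one such map, a robot-centric max-height grid, is below; the resolution, extent, and max-height statistic are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def height_map(points, grid_res=0.5, extent=10.0):
    """Rasterize a LiDAR point cloud (N x 3 array of x/y/z in meters,
    robot-centric) into a 2D grid storing the max height per cell."""
    n_cells = int(2 * extent / grid_res)
    grid = np.zeros((n_cells, n_cells), dtype=np.float32)
    # Map metric x/y coordinates to integer cell indices.
    ix = ((points[:, 0] + extent) / grid_res).astype(int)
    iy = ((points[:, 1] + extent) / grid_res).astype(int)
    # Discard points outside the grid extent.
    keep = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)
    for x, y, z in zip(ix[keep], iy[keep], points[keep][:, 2]):
        grid[y, x] = max(grid[y, x], z)
    return grid
```

The same binning skeleton extends to the other map channels by swapping the per-cell statistic (e.g. point density or intensity instead of max height).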
arXiv Detail & Related papers (2024-03-29T07:03:10Z)
- Deep Reinforcement Learning with Dynamic Graphs for Adaptive Informative Path Planning [22.48658555542736]
A key task in robotic data acquisition is planning paths through an initially unknown environment to collect observations.
We propose a novel deep reinforcement learning approach for adaptively replanning robot paths to map targets of interest in unknown 3D environments.
arXiv Detail & Related papers (2024-02-07T14:24:41Z)
- CoverNav: Cover Following Navigation Planning in Unstructured Outdoor Environment with Deep Reinforcement Learning [1.0499611180329804]
We propose a novel Deep Reinforcement Learning based algorithm, called CoverNav, for identifying covert and navigable trajectories in offroad terrains and jungle environments.
CoverNav helps robot agents learn to prefer low-elevation terrain through a reward function that penalizes them proportionately when they traverse high elevations.
We evaluate CoverNav's effectiveness in achieving a maximum goal distance of 12 meters and its success rate in different elevation scenarios with and without cover objects.
arXiv Detail & Related papers (2023-08-12T15:19:49Z)
- AZTR: Aerial Video Action Recognition with Auto Zoom and Temporal Reasoning [63.628195002143734]
We propose a novel approach for aerial video action recognition.
Our method is designed for videos captured using UAVs and can run on edge or mobile devices.
We present a learning-based approach that uses customized auto zoom to automatically identify the human target and scale it appropriately.
arXiv Detail & Related papers (2023-03-02T21:24:19Z)
- Offline Reinforcement Learning for Visual Navigation [66.88830049694457]
ReViND is the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world.
We show that ReViND can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
arXiv Detail & Related papers (2022-12-16T02:23:50Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that it is pivotal in increasing controller robustness by learning a latent space capturing the key stance phases constituting a particular gait.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- WayFAST: Traversability Predictive Navigation for Field Robots [5.914664791853234]
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots.
Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models.
We show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
arXiv Detail & Related papers (2022-03-22T22:02:03Z)
- End-to-end Interpretable Neural Motion Planner [78.69295676456085]
We propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios.
We design a holistic model that takes as input raw LIDAR data and a HD map and produces interpretable intermediate representations.
We demonstrate the effectiveness of our approach in real-world driving data captured in several cities in North America.
arXiv Detail & Related papers (2021-01-17T14:16:12Z)
- On Reward Shaping for Mobile Robot Navigation: A Reinforcement Learning and SLAM Based Approach [7.488722678999039]
We present a map-less path planning algorithm based on Deep Reinforcement Learning (DRL) for mobile robots navigating in unknown environments.
The planner is trained using a reward function shaped based on the online knowledge of the map of the training environment.
The policy trained in the simulation environment can be directly and successfully transferred to the real robot.
arXiv Detail & Related papers (2020-02-10T22:00:16Z)
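The last entry above shapes its reward from online knowledge of the map built during training. One plausible shaping scheme combines goal progress with an obstacle-proximity penalty drawn from the SLAM map; the specific terms and weights below are illustrative assumptions, not that paper's exact function:

```python
def shaped_reward(prev_dist, curr_dist, min_obstacle_dist,
                  collision=False, reached=False):
    """Illustrative shaped reward for map-less DRL navigation.
    prev_dist / curr_dist: distance to goal before and after the step;
    min_obstacle_dist: closest obstacle range from the online map."""
    if collision:
        return -100.0          # terminal penalty
    if reached:
        return 100.0           # terminal bonus
    progress = prev_dist - curr_dist                       # > 0 when closing in
    proximity_penalty = max(0.0, 0.5 - min_obstacle_dist)  # active within 0.5 m
    return 10.0 * progress - 5.0 * proximity_penalty
```

Dense progress terms like this are the usual remedy for the sparse-reward problem in goal-reaching DRL; the map-derived proximity term is what ties the shaping to the online SLAM estimate.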
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.