CoverNav: Cover Following Navigation Planning in Unstructured Outdoor
Environment with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2308.06594v1
- Date: Sat, 12 Aug 2023 15:19:49 GMT
- Title: CoverNav: Cover Following Navigation Planning in Unstructured Outdoor
Environment with Deep Reinforcement Learning
- Authors: Jumman Hossain, Abu-Zaher Faridee, Nirmalya Roy, Anjan Basak, Derrik
E. Asher
- Abstract summary: We propose a novel Deep Reinforcement Learning based algorithm, called CoverNav, for identifying covert and navigable trajectories in offroad terrains and jungle environments.
CoverNav helps robot agents learn low-elevation terrain through a reward function that penalizes them proportionately when they experience high elevation.
We evaluate CoverNav's effectiveness in achieving a maximum goal distance of 12 meters and its success rate in different elevation scenarios with and without cover objects.
- Score: 1.0499611180329804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous navigation in offroad environments has been extensively studied in
the robotics field. However, navigation in covert situations where an
autonomous vehicle needs to remain hidden from outside observers remains an
underexplored area. In this paper, we propose a novel Deep Reinforcement
Learning (DRL) based algorithm, called CoverNav, for identifying covert and
navigable trajectories with minimal cost in offroad terrains and jungle
environments in the presence of observers. CoverNav focuses on unmanned ground
vehicles seeking shelter and taking cover while navigating safely to a
predefined destination. Our proposed DRL method computes a local cost map that
helps identify which path grants maximal covertness while maintaining a
low-cost trajectory, using an elevation map generated from 3D point cloud
data, the robot's pose, and directed goal information. CoverNav helps robot
agents learn low-elevation terrain through a reward function that penalizes
them proportionately when they experience high elevation. If an observer is
spotted, CoverNav enables the robot to select natural obstacles (e.g., rocks,
houses, disabled vehicles, trees) and use them as shelter to hide behind.
to hide behind. We evaluate CoverNav using the Unity simulation environment and
show that it guarantees dynamically feasible velocities in the terrain when fed
with an elevation map generated by another DRL-based navigation algorithm.
Additionally, we evaluate CoverNav's effectiveness in achieving a maximum goal
distance of 12 meters and its success rate in different elevation scenarios
with and without cover objects. We observe performance comparable to
state-of-the-art (SOTA) methods without compromising accuracy.
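The reward shaping the abstract describes (rewarding progress toward the goal while proportionately penalizing high elevation) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the weights, the grid-map format, and the function name are assumptions.

```python
import numpy as np

def covertness_reward(elevation_map, pos, goal, prev_pos,
                      w_goal=1.0, w_elev=0.5):
    """Toy reward: progress toward the goal, minus a penalty
    proportional to the elevation at the robot's current cell.
    (Illustrative only; weights and map format are assumed.)"""
    progress = (np.linalg.norm(np.subtract(goal, prev_pos))
                - np.linalg.norm(np.subtract(goal, pos)))
    elev_penalty = elevation_map[pos]  # higher terrain -> larger penalty
    return w_goal * progress - w_elev * elev_penalty

# Example on a 4x4 elevation grid: a low-elevation step is rewarded
elev = np.array([[0., 0., 1., 2.],
                 [0., 1., 2., 3.],
                 [0., 0., 1., 2.],
                 [0., 0., 0., 1.]])
r = covertness_reward(elev, pos=(2, 1), goal=(3, 3), prev_pos=(2, 0))
```

Under this sketch, a step across high-elevation cells can yield a negative reward even when it moves closer to the goal, which is the proportional-penalty behavior the summary describes.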
Related papers
- EnCoMP: Enhanced Covert Maneuver Planning with Adaptive Threat-Aware Visibility Estimation using Offline Reinforcement Learning [0.6597195879147555]
We propose EnCoMP, an enhanced navigation framework to enable robots to navigate covertly in diverse outdoor settings.
We generate high-fidelity multi-map representations, including cover maps, potential threat maps, height maps, and goal maps from LiDAR point clouds.
We demonstrate our method's capabilities on a physical Jackal robot, showing extensive experiments across diverse terrains.
arXiv Detail & Related papers (2024-03-29T07:03:10Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation, by fusing sensory information, handling of uncertainty, and generation of contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning [53.13393315664145]
We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments.
Our method trains a novel RL policy using an actor-critic network and arbitrary data collected in real outdoor vegetation.
We observe that VAPOR's actions improve success rates by up to 40%, decrease the average current consumption by up to 2.9%, and decrease the normalized trajectory length by up to 11.2%.
arXiv Detail & Related papers (2023-09-14T16:21:27Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as success rates for search-and-rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- Machine Learning Based Path Planning for Improved Rover Navigation (Pre-Print Version) [26.469069930513857]
Enhanced AutoNav (ENav) is the baseline surface navigation software for NASA's Perseverance rover.
ENav sorts a list of candidate paths for the rover to traverse, then uses the Approximate Clearance Evaluation (ACE) algorithm to evaluate whether the most highly ranked paths are safe.
ACE is crucial for maintaining the safety of the rover, but is computationally expensive.
We present two computations that more effectively rank the candidate paths before ACE evaluation.
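The pattern this summary describes (ranking candidate paths with a cheap heuristic so that an expensive safety evaluation such as ACE is spent on the most promising paths first) can be sketched as follows. All function names, the heuristic, and the "budget" safety criterion here are hypothetical stand-ins, not ENav's actual computations.

```python
# Sketch: rank candidate paths cheaply, then run the expensive safety
# check (ACE-like) in ranked order and stop at the first safe path.
# Heuristic, safety criterion, and names are all hypothetical.

def cheap_cost(path):
    """Cheap stand-in heuristic: total 1-D path length (hypothetical)."""
    return sum(abs(a - b) for a, b in zip(path, path[1:]))

def expensive_safety_check(path, budget):
    """Placeholder for an ACE-style evaluation; here just a threshold."""
    return cheap_cost(path) <= budget

def first_safe_path(candidates, budget):
    """Evaluate candidates in heuristic order; return the first safe one."""
    for path in sorted(candidates, key=cheap_cost):
        if expensive_safety_check(path, budget):
            return path
    return None  # no candidate passed the safety check

candidates = [[0, 3, 7], [0, 1, 2], [0, 5, 4]]
best = first_safe_path(candidates, budget=4)
```

The benefit of this ordering is that when the top-ranked candidate passes, the expensive check runs only once instead of once per candidate.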
arXiv Detail & Related papers (2020-11-11T19:18:47Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.