Relatively Lazy: Indoor-Outdoor Navigation Using Vision and GNSS
- URL: http://arxiv.org/abs/2101.05107v1
- Date: Wed, 13 Jan 2021 14:43:45 GMT
- Title: Relatively Lazy: Indoor-Outdoor Navigation Using Vision and GNSS
- Authors: Benjamin Congram and Timothy D. Barfoot
- Abstract summary: Relative navigation is a robust and efficient solution for autonomous vision-based path following in difficult environments.
We show that lazy mapping, together with delaying estimation until a path-tracking error is required, avoids the need to estimate absolute states.
We validate our approach on a real robot through an experiment in a joint indoor-outdoor environment comprising 3.5km of autonomous route repeating.
- Score: 14.39926267531322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual Teach and Repeat (VT&R) has shown relative navigation is a robust and
efficient solution for autonomous vision-based path following in difficult
environments. Adding additional absolute sensors such as Global Navigation
Satellite Systems (GNSS) has the potential to expand the domain of VT&R to
environments where the ability to visually localize is not guaranteed. Our
method of lazy mapping and delaying estimation until a path-tracking error is
needed avoids the need to estimate absolute states. As a result, map
optimization is not required and paths can be driven immediately after being
taught. We validate our approach on a real robot through an experiment in a
joint indoor-outdoor environment comprising 3.5km of autonomous route repeating
across a variety of lighting conditions. We achieve smooth error signals
throughout the runs despite large sections of dropout for each sensor.
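The central idea of repeating a taught route by servoing on a relative path-tracking error, rather than estimating absolute states, can be illustrated with a short sketch. The function below is a hypothetical illustration, not the paper's implementation: it computes a signed cross-track error of the robot's position against a piecewise-linear taught path, with both expressed in a shared local frame (positive means the robot is to the left of the path direction).

```python
import math

def cross_track_error(path, position):
    """Signed lateral (cross-track) error of `position` relative to a
    piecewise-linear taught path given as (x, y) waypoints in a shared
    local frame. Positive means left of the path's travel direction."""
    best = None
    px, py = position
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0.0:
            continue  # skip degenerate (zero-length) segments
        # Project the position onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((px - x0) * dx + (py - y0) * dy) / seg_len2))
        cx, cy = x0 + t * dx, y0 + t * dy
        dist = math.hypot(px - cx, py - cy)
        if best is None or dist < best[0]:
            # Sign from the z-component of the 2D cross product.
            sign = 1.0 if (dx * (py - y0) - dy * (px - x0)) >= 0.0 else -1.0
            best = (dist, sign * dist)
    return best[1]
```

A tracking controller would feed this error signal back to steering; because everything is computed relative to the taught path, no globally consistent map or absolute-state estimate is required.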
Related papers
- IN-Sight: Interactive Navigation through Sight [20.184155117341497]
IN-Sight is a novel approach to self-supervised path planning.
It calculates traversability scores and incorporates them into a semantic map.
To precisely navigate around obstacles, IN-Sight employs a local planner.
arXiv Detail & Related papers (2024-08-01T07:27:54Z)
- TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation [5.484041860401147]
TOP-Nav is a novel legged navigation framework that integrates a comprehensive path planner with Terrain awareness, Obstacle avoidance and closed-loop Proprioception.
We show that TOP-Nav achieves open-world navigation in which the robot can handle terrains or disturbances beyond the distribution of its prior knowledge.
arXiv Detail & Related papers (2024-04-23T17:42:45Z)
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Unsupervised Visual Odometry and Action Integration for PointGoal Navigation in Indoor Environment [14.363948775085534]
PointGoal navigation in indoor environments is a fundamental task for personal robots to navigate to a specified point.
To improve PointGoal navigation accuracy without a GPS signal, we use visual odometry (VO) and propose a novel action integration module (AIM) trained in an unsupervised manner.
Experiments show that the proposed system achieves satisfactory results and outperforms the partially supervised learning algorithms on the popular Gibson dataset.
arXiv Detail & Related papers (2022-10-02T03:12:03Z)
- WayFAST: Traversability Predictive Navigation for Field Robots [5.914664791853234]
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots.
Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models.
We show that our training pipeline based on online traction estimates is more data-efficient than heuristic-based methods.
arXiv Detail & Related papers (2022-03-22T22:02:03Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z)
- Indoor Point-to-Point Navigation with Deep Reinforcement Learning and Ultra-wideband [1.6799377888527687]
Moving obstacles and non-line-of-sight occurrences can generate noisy and unreliable signals.
We show how a power-efficient point-to-point local planner, learnt with deep reinforcement learning (RL), can constitute a robust, noise-resilient short-range guidance system and a complete navigation solution.
Our results show that the computationally efficient end-to-end policy, learnt purely in simulation, can provide a robust, scalable and low-cost navigation solution that runs at the edge.
arXiv Detail & Related papers (2020-11-18T12:30:36Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.