Virtual Testbed for Monocular Visual Navigation of Small Unmanned Aircraft Systems
- URL: http://arxiv.org/abs/2007.00737v1
- Date: Wed, 1 Jul 2020 20:35:26 GMT
- Title: Virtual Testbed for Monocular Visual Navigation of Small Unmanned Aircraft Systems
- Authors: Kyung Kim, Robert C. Leishman, and Scott L. Nykl
- Abstract summary: This work presents a virtual testbed for conducting simulated flight tests over real-world terrain.
It analyzes the real-time performance of visual navigation algorithms at 31 Hz.
This tool was created to find a visual odometry algorithm appropriate for further GPS-denied navigation research on fixed-wing aircraft.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular visual navigation methods have seen significant advances in the
last decade, recently producing several real-time solutions for autonomously
navigating small unmanned aircraft systems without relying on GPS. This is
critical for military operations which may involve environments where GPS
signals are degraded or denied. However, testing and comparing visual
navigation algorithms remains a challenge since visual data is expensive to
gather. Conducting flight tests in a virtual environment is an attractive
solution prior to committing to outdoor testing.
This work presents a virtual testbed for conducting simulated flight tests
over real-world terrain and analyzing the real-time performance of visual
navigation algorithms at 31 Hz. This tool was created to ultimately find a
visual odometry algorithm appropriate for further GPS-denied navigation
research on fixed-wing aircraft, even though all of the algorithms were
designed for other modalities. This testbed was used to evaluate three current
state-of-the-art, open-source monocular visual odometry algorithms on a
fixed-wing platform: Direct Sparse Odometry, Semi-Direct Visual Odometry, and
ORB-SLAM2 (with loop closures disabled).
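To make the evaluation setup concrete, below is a minimal sketch of the kind of real-time loop such a testbed runs: simulator-rendered frames are pushed through a monocular VO front end under a fixed 31 Hz budget while the estimated trajectory is checked against simulator ground truth. The names `vo.track`, `render_frame`, and `ground_truth_pose` are hypothetical placeholders, not the paper's actual API.

```python
# Hedged sketch of a fixed-rate VO evaluation loop over simulated terrain.
# vo.track, render_frame, and ground_truth_pose are assumed interfaces.
import time
import numpy as np

FRAME_RATE_HZ = 31                 # real-time rate reported in the abstract
FRAME_PERIOD = 1.0 / FRAME_RATE_HZ

def evaluate_vo(vo, render_frame, ground_truth_pose, n_frames=1000):
    """Feed simulated frames to a VO algorithm; report error and deadline misses."""
    errors, late = [], 0
    for k in range(n_frames):
        t0 = time.perf_counter()
        frame = render_frame(k)            # H x W x 3 image from the simulator
        est_pose = vo.track(frame)         # 4x4 homogeneous camera pose
        if time.perf_counter() - t0 > FRAME_PERIOD:
            late += 1                      # missed the 31 Hz deadline
        gt_pose = ground_truth_pose(k)     # simulator truth for frame k
        errors.append(np.linalg.norm(est_pose[:3, 3] - gt_pose[:3, 3]))
    return float(np.mean(errors)), late / n_frames
```

A harness like this can wrap DSO, SVO, and ORB-SLAM2 behind a common `track` interface, which is what makes a like-for-like comparison possible.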
Related papers
- UAV-Based Human Body Detector Selection and Fusion for Geolocated Saliency Map Generation [0.2499907423888049]
The problem of reliably detecting and geolocating objects of different classes in soft real-time is essential in many application areas, such as Search and Rescue performed using Unmanned Aerial Vehicles (UAVs).
This research addresses the complementary problems of context-aware, vision-based detector selection, allocation, and execution.
The detection results are fused using a method for building maps of salient locations that exploits a novel sensor model for vision-based detections, handling both positive and negative observations (a minimal log-odds fusion of this kind is sketched after this entry).
arXiv Detail & Related papers (2024-08-29T13:00:37Z)
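The abstract does not spell out the sensor model, but a standard way to fuse positive and negative detections into a geolocated map is a log-odds (occupancy-grid style) update; the sketch below illustrates that mechanism with assumed detection probabilities, not the paper's actual model.

```python
# Log-odds fusion of positive and negative vision-based detections into a
# geolocated saliency grid. Probabilities are illustrative assumptions.
import numpy as np

P_DETECT = 0.8        # assumed P(detection | object present)
P_FALSE_ALARM = 0.1   # assumed P(detection | object absent)

class SaliencyGrid:
    def __init__(self, shape):
        self.log_odds = np.zeros(shape)     # prior P = 0.5 everywhere

    def update(self, cell, detected):
        """Bayes update: positives raise a cell's odds, negatives lower them."""
        if detected:
            self.log_odds[cell] += np.log(P_DETECT / P_FALSE_ALARM)
        else:
            self.log_odds[cell] += np.log((1 - P_DETECT) / (1 - P_FALSE_ALARM))

    def probability(self):
        """Convert accumulated log-odds back to per-cell saliency."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))

grid = SaliencyGrid((100, 100))
grid.update((40, 17), detected=True)        # one UAV pass sees something here
grid.update((40, 17), detected=False)       # a later pass does not
```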
- NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking [65.24988062003096]
We present NAVSIM, a framework for benchmarking vision-based driving policies.
Our simulation is non-reactive, i.e., the evaluated policy and the environment do not influence each other; a minimal open-loop scoring sketch follows this entry.
NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights.
arXiv Detail & Related papers (2024-06-21T17:59:02Z)
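As a rough illustration of non-reactive evaluation, the sketch below scores a policy on logged scenarios whose futures are fixed, so the policy's outputs never feed back into the world. The displacement-error metric and scenario fields are simplifying assumptions; NAVSIM's actual scoring is richer.

```python
# Open-loop (non-reactive) evaluation: the policy predicts from logged history
# and is scored against the logged future; nothing it does alters the scene.
import numpy as np

def open_loop_score(policy, scenarios, horizon=8):
    """Mean displacement error between predicted and logged trajectories."""
    errors = []
    for scene in scenarios:
        pred = policy(scene["history"])      # (horizon, 2) predicted waypoints
        logged = scene["future"][:horizon]   # (horizon, 2) ground-truth future
        errors.append(np.linalg.norm(pred - logged, axis=1).mean())
    return float(np.mean(errors))
```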
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, the Cross-knowledge Attention-guided Module, and the Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- Radio Map Estimation -- An Open Dataset with Directive Transmitter Antennas and Initial Experiments [49.61405888107356]
We release a dataset of simulated path-loss radio maps together with realistic city maps from real-world locations and aerial images from open data sources.
Initial experiments regarding model architectures, input feature design, and estimation of radio maps from aerial images are presented.
arXiv Detail & Related papers (2024-01-12T14:56:45Z)
- Vision-Based Autonomous Navigation for Unmanned Surface Vessel in Extreme Marine Conditions [2.8983738640808645]
This paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions.
The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog.
The results are compared with state-of-the-art de-hazing methods on the benchmark MBZIRC simulation dataset (one classic de-hazing baseline is sketched after this entry).
arXiv Detail & Related papers (2023-08-08T14:25:13Z)
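For context on the de-hazing comparison, below is a compact sketch of one classic baseline, the dark channel prior (after He et al.); it is a generic reference method, not the paper's navigation framework.

```python
# Dark channel prior de-hazing, a common baseline for restoring visibility in
# fog/sandstorm imagery. Parameter values are typical illustrative choices.
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """img: float RGB in [0, 1]; returns an approximate haze-free image."""
    dark = minimum_filter(img.min(axis=2), size=patch)       # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    atmos = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission map, then invert the haze model I = J*t + A*(1 - t).
    trans = 1.0 - omega * minimum_filter((img / atmos).min(axis=2), size=patch)
    trans = np.clip(trans, t_min, 1.0)[..., None]
    return np.clip((img - atmos) / trans + atmos, 0.0, 1.0)
```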
- Unsupervised Visual Odometry and Action Integration for PointGoal Navigation in Indoor Environment [14.363948775085534]
PointGoal navigation in indoor environment is a fundamental task for personal robots to navigate to a specified point.
To improve PointGoal navigation accuracy without a GPS signal, we use visual odometry (VO) and propose a novel action integration module (AIM) trained in an unsupervised manner; the general idea is sketched after this entry.
Experiments show that the proposed system achieves satisfactory results and outperforms the partially supervised learning algorithms on the popular Gibson dataset.
arXiv Detail & Related papers (2022-10-02T03:12:03Z)
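The sketch below illustrates the general action-integration idea: each discrete action implies a nominal egomotion, and integrating those motions yields a pose prior that can be blended with a VO estimate. The motion table and fixed fusion weight are assumptions for illustration; the paper's AIM is a learned module.

```python
# Dead-reckoning by action integration, fused with a VO egomotion estimate.
# ACTION_MODEL values and the fusion weight are illustrative assumptions.
import numpy as np

ACTION_MODEL = {                      # nominal (dx, dy, dtheta) per action
    "forward":    (0.25, 0.0, 0.0),
    "turn_left":  (0.0, 0.0, np.deg2rad(10)),
    "turn_right": (0.0, 0.0, -np.deg2rad(10)),
}

def integrate(pose, motion):
    """Compose an SE(2) pose (x, y, theta) with a body-frame motion."""
    x, y, th = pose
    dx, dy, dth = motion
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def fused_step(pose, action, vo_motion, vo_weight=0.7):
    """Blend the VO egomotion with the action's nominal motion, then integrate."""
    nominal = np.array(ACTION_MODEL[action])
    motion = vo_weight * np.asarray(vo_motion) + (1 - vo_weight) * nominal
    return integrate(pose, motion)

pose = (0.0, 0.0, 0.0)
pose = fused_step(pose, "forward", vo_motion=(0.23, 0.01, 0.0))
```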
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation [100.08270721713149]
PointGoal navigation has been introduced in simulated Embodied AI environments.
Recent advances solve this PointGoal navigation task with near-perfect accuracy (99.6% success).
We show that integrating visual odometry techniques into navigation policies improves the state of the art on the popular Habitat PointNav benchmark by a large margin; the core substitution is sketched after this entry.
arXiv Detail & Related papers (2021-08-26T02:12:49Z)
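A minimal sketch of the core substitution: instead of reading a privileged GPS+Compass sensor, the agent maintains the goal vector itself by re-expressing it through VO-estimated egomotion each step. `env`, `policy`, and `vo_model` here are hypothetical stand-ins, not Habitat's actual API.

```python
# Maintain a PointGoal with VO instead of GPS+Compass: after every step, the
# goal vector is translated and rotated by the estimated egomotion.
import numpy as np

def navigate(policy, vo_model, env, goal_xy, max_steps=500):
    goal = np.asarray(goal_xy, dtype=float)   # goal in the agent's start frame
    prev_obs = env.reset()
    for _ in range(max_steps):
        if np.linalg.norm(goal) < 0.2:        # close enough: declare success
            break
        action = policy(prev_obs, goal)       # policy only sees VO-derived goal
        obs = env.step(action)
        dx, dy, dtheta = vo_model(prev_obs, obs)   # egomotion from RGB pairs
        # Re-express the goal in the new agent frame: remove the translation,
        # then rotate by -dtheta.
        shifted = goal - np.array([dx, dy])
        c, s = np.cos(dtheta), np.sin(dtheta)
        goal = np.array([ c * shifted[0] + s * shifted[1],
                         -s * shifted[0] + c * shifted[1]])
        prev_obs = obs
    return goal                               # residual offset to the goal
```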
- Towards bio-inspired unsupervised representation learning for indoor aerial navigation [4.26712082692017]
This research presents a biologically inspired deep-learning algorithm for simultaneous localization and mapping (SLAM) and its application in a drone navigation system.
We propose an unsupervised representation learning method that yields low-dimensional latent state descriptors, mitigates sensitivity to perceptual aliasing, and runs on power-efficient embedded hardware (a minimal autoencoder version of this idea is sketched after this entry).
The designed algorithm is evaluated on a dataset collected in an indoor warehouse environment, and initial results show the feasibility of robust indoor aerial navigation.
arXiv Detail & Related papers (2021-06-17T08:42:38Z)
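As a rough illustration of low-dimensional latent state descriptors learned without labels, below is a minimal autoencoder whose bottleneck plays the role of the descriptor; the architecture and sizes are assumptions, not the paper's bio-inspired network.

```python
# Minimal unsupervised descriptor learner: an autoencoder bottleneck yields a
# compact latent state per frame. Sizes and layers are illustrative choices.
import torch
import torch.nn as nn

class DescriptorAE(nn.Module):
    def __init__(self, input_dim=64 * 64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),        # low-dimensional descriptor
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)                    # descriptor used downstream
        return self.decoder(z), z

model = DescriptorAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 64 * 64)                # stand-in grayscale frames
recon, z = model(frames)
loss = nn.functional.mse_loss(recon, frames)   # purely unsupervised objective
opt.zero_grad()
loss.backward()
opt.step()
```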
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification to preserve information about small objects of interest during domain adaptation (the loss composition is sketched after this entry).
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
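To make the mechanism concrete, the sketch below shows one plausible loss composition for a semantically consistent CycleGAN generator update: the usual adversarial and cycle terms plus a term that penalizes changing pixel classes under a frozen segmenter, which is what protects small objects. The weights and the `segmenter` interface are illustrative assumptions, not the paper's exact formulation.

```python
# Generator loss for a CycleGAN with an added semantic-consistency term.
# G: src->tgt generator, F_inv: tgt->src generator, D_target: discriminator,
# segmenter: frozen semantic classifier. Loss weights are illustrative.
import torch
import torch.nn.functional as F

def generator_loss(G, F_inv, D_target, segmenter, real_src,
                   lam_cyc=10.0, lam_sem=1.0):
    fake_tgt = G(real_src)
    # Adversarial term: the translated BEV should fool the discriminator.
    pred = D_target(fake_tgt)
    adv = F.mse_loss(pred, torch.ones_like(pred))
    # Cycle consistency: src -> tgt -> src should reconstruct the input.
    cyc = F.l1_loss(F_inv(fake_tgt), real_src)
    # Semantic consistency: the frozen segmenter should assign the same
    # classes before and after translation, preserving small objects.
    with torch.no_grad():
        labels = segmenter(real_src).argmax(dim=1)
    sem = F.cross_entropy(segmenter(fake_tgt), labels)
    return adv + lam_cyc * cyc + lam_sem * sem
```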