The Use of Multi-Scale Fiducial Markers To Aid Takeoff and Landing
Navigation by Rotorcraft
- URL: http://arxiv.org/abs/2309.08769v3
- Date: Tue, 12 Dec 2023 17:29:05 GMT
- Title: The Use of Multi-Scale Fiducial Markers To Aid Takeoff and Landing
Navigation by Rotorcraft
- Authors: Jongwon Lee, Su Yeon Choi, Timothy Bretl
- Abstract summary: This paper quantifies the performance of visual SLAM that leverages multi-scale fiducial markers.
We evaluate performance during takeoff and landing operations in a variety of environmental conditions.
We release all of our results -- our dataset and our implementation of visual SLAM with fiducial markers -- to the public as open-source.
- Score: 5.528298061166612
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper quantifies the performance of visual SLAM that leverages
multi-scale fiducial markers (i.e., artificial landmarks that can be detected
at a wide range of distances) to show its potential for reliable takeoff and
landing navigation in rotorcraft. Prior work has shown that square markers with
a black-and-white pattern of grid cells can be used to improve the performance
of visual SLAM with color cameras. We extend this prior work to allow nested
marker layouts. We evaluate performance during semi-autonomous takeoff and
landing operations in a variety of environmental conditions by a DJI Matrice
300 RTK rotorcraft with two FLIR Blackfly color cameras, using RTK GNSS to
obtain ground truth pose estimates. Performance measures include absolute
trajectory error and the fraction of total frames for which a pose is
estimated. We release all of our results -- our dataset and our
implementation of visual SLAM with fiducial markers -- to the public as
open-source.
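Absolute trajectory error (ATE) is the root-mean-square positional difference between estimated and ground-truth poses after the two trajectories are aligned. A minimal sketch (translation-only alignment over hypothetical NumPy arrays; the paper's actual evaluation may use full Umeyama alignment with rotation and scale):

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE between ground-truth and estimated positions (N x 3 arrays),
    after removing the mean offset (translation-only alignment)."""
    gt = np.asarray(gt, dtype=float)
    est = np.asarray(est, dtype=float)
    # Align by centroids; a full evaluation would also solve for the
    # rotation (and possibly scale) via Umeyama alignment.
    aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    errors = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Synthetic example: estimated trajectory offset by a constant 0.1 m in x.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.1, 0.0, 0.0])
print(absolute_trajectory_error(gt, est))  # ~0: the offset is removed by alignment
```

The second reported measure, the fraction of frames with an estimated pose, is simply a count of successful estimates divided by the total number of frames.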
Related papers
- Machine Learning Models for Improved Tracking from Range-Doppler Map Images [1.3654846342364306]
We propose novel machine learning models for target detection and uncertainty estimation in range-Doppler map (RDM) images for Ground Moving Target Indicator (GMTI) radars.
We show that by using the outputs of these models, we can significantly improve the performance of a multiple hypothesis tracker for complex multi-target air-to-ground tracking scenarios.
arXiv Detail & Related papers (2024-07-03T14:20:24Z)
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadow, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and ground plane assumption for cross-frame correspondence can lead to a light-weight network with significantly improved performances in speed and accuracy.
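The cross-frame correspondence from camera intrinsics and a ground-plane assumption rests on the classical planar homography H = K (R - t n^T / d) K^(-1). A small sketch with made-up intrinsics and motion (an illustration of the geometry, not the HomoFusion code):

```python
import numpy as np

def ground_plane_homography(K, R, t, n, d):
    """Homography mapping pixels of plane points from frame 1 to frame 2.
    K: 3x3 intrinsics; R, t: pose of frame 2 w.r.t. frame 1;
    n, d: the plane satisfies n^T X = d in frame 1's coordinates."""
    n = np.asarray(n, dtype=float).reshape(3, 1)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    return K @ (R - t @ n.T / d) @ np.linalg.inv(K)

# Hypothetical camera: identity rotation, pure forward translation.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])   # camera moves 0.5 m forward
n = np.array([0.0, 1.0, 0.0])   # ground normal (camera y axis points down)
H = ground_plane_homography(K, R, t, n, d=1.5)

# Warp a pixel: homogeneous multiply, then dehomogenize.
p1 = np.array([400.0, 300.0, 1.0])
p2 = H @ p1
p2 = p2 / p2[2]
print(p2[:2])
```

Forward motion pushes ground pixels away from the principal point, which is what the warp reproduces; pixels off the ground plane violate the model and can be detected as outliers.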
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers [4.918853205874711]
This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers.
The SLAM-based approaches are compared because prior work has shown their superior performance over feature-only methods.
Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them.
arXiv Detail & Related papers (2023-09-08T17:05:24Z)
- Rendering the Directional TSDF for Tracking and Multi-Sensor Registration with Point-To-Plane Scale ICP [29.998917158604694]
The Directional Truncated Signed Distance Function (DTSDF) is an augmentation of the regular TSDF.
We present methods for rendering depth and color images from the DTSDF.
We observe that our method improves tracking performance and increases the re-usability of mapped scenes.
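For context, a regular TSDF stores per voxel a truncated signed distance to the nearest surface plus a fusion weight, updated from each depth frame by a running weighted average; the directional variant maintains several such fields per voxel, one per dominant surface orientation. A minimal sketch of the standard (non-directional) per-voxel update, with illustrative names and truncation values not taken from the paper:

```python
def tsdf_update(tsdf, weight, sdf, trunc=0.1, max_weight=100.0):
    """Fuse one new signed-distance observation into a voxel.
    tsdf, weight: current voxel state; sdf: signed distance (meters) from
    the voxel to the observed surface; trunc: truncation band (meters)."""
    if sdf < -trunc:
        return tsdf, weight            # voxel far behind the surface: skip
    d = min(1.0, sdf / trunc)          # truncate to [-1, 1]
    new_weight = min(weight + 1.0, max_weight)
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    return new_tsdf, new_weight

# Fuse three noisy observations of a surface ~2 cm in front of the voxel.
state = (0.0, 0.0)
for obs in (0.02, 0.025, 0.015):
    state = tsdf_update(*state, obs)
print(state)  # averaged truncated distance and accumulated weight
```

The surface is then recovered as the zero level set of the fused field (e.g. by marching cubes or raycasting), which is what the rendering methods above operate on.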
arXiv Detail & Related papers (2023-01-30T11:46:03Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
- R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package [7.7016529229597035]
R3LIVE takes advantage of measurement of LiDAR, inertial, and visual sensors to achieve robust and accurate state estimation.
R3LIVE is a versatile and well-engineered system suited to various possible applications.
We open-source R3LIVE, including all our code, software utilities, and the mechanical design of our device.
arXiv Detail & Related papers (2021-09-10T22:43:59Z)
- ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames [41.33367060137042]
An RGB-D SLAM system is proposed to exploit the structural information in indoor scenes, allowing for accurate tracking and efficient dense mapping on a CPU.
Planar surfels are extracted directly from sparse planes in our map, while non-planar surfels are built by extracting superpixels.
We evaluate our method on public benchmarks for pose estimation, drift and reconstruction accuracy, achieving superior performance compared to other state-of-the-art methods.
arXiv Detail & Related papers (2021-03-28T07:11:57Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking [85.333260415532]
We develop a novel late fusion method to infer the fusion weight maps of both RGB and thermal (T) modalities.
When the appearance cue is unreliable, we take motion cues into account to make the tracker robust.
Numerous results on three recent RGB-T tracking datasets show that the proposed tracker performs significantly better than other state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-04T08:11:33Z)
- ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
arXiv Detail & Related papers (2020-04-16T06:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.