AstroVision: Towards Autonomous Feature Detection and Description for
Missions to Small Bodies Using Deep Learning
- URL: http://arxiv.org/abs/2208.02053v1
- Date: Wed, 3 Aug 2022 13:18:44 GMT
- Authors: Travis Driver, Katherine Skinner, Mehregan Dor, Panagiotis Tsiotras
- Abstract summary: This paper introduces AstroVision, a large-scale dataset comprised of 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions.
We leverage AstroVision to develop a set of standardized benchmarks and conduct an exhaustive evaluation of both handcrafted and data-driven feature detection and description methods.
Next, we employ AstroVision for end-to-end training of a state-of-the-art, deep feature detection and description network and demonstrate improved performance on multiple benchmarks.
- Score: 14.35670544436183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Missions to small celestial bodies rely heavily on optical feature tracking
for characterization of and relative navigation around the target body. While
deep learning has led to great advancements in feature detection and
description, training and validating data-driven models for space applications
is challenging due to the limited availability of large-scale, annotated
datasets. This paper introduces AstroVision, a large-scale dataset comprised of
115,970 densely annotated, real images of 16 different small bodies captured
during past and ongoing missions. We leverage AstroVision to develop a set of
standardized benchmarks and conduct an exhaustive evaluation of both
handcrafted and data-driven feature detection and description methods. Next, we
employ AstroVision for end-to-end training of a state-of-the-art, deep feature
detection and description network and demonstrate improved performance on
multiple benchmarks. The full benchmarking pipeline and the dataset will be
made publicly available to facilitate the advancement of computer vision
algorithms for space applications.
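The benchmarks described above evaluate how well detected features can be matched between images of the target body. As an illustrative sketch only (not AstroVision's actual pipeline), the core descriptor-matching step in such an evaluation is often mutual nearest-neighbour matching with Lowe's ratio test; the function name and ratio default below are assumptions for the example:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) and (M, D) float arrays of local feature
    descriptors from two images. Returns (i, j) index pairs that form
    putative matches.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        # Ratio test: best match must be clearly better than the runner-up.
        if d[i, best] < ratio * d[i, second]:
            # Mutual check: descriptor i must also be b_best's nearest neighbour.
            if np.argmin(d[:, best]) == i:
                matches.append((i, best))
    return matches
```

Benchmarks then typically score such putative matches against ground-truth correspondences (here, the dense annotations) to compute metrics like matching accuracy or pose-estimation error.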
Related papers
- MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain [4.87717454493713]
Current approaches rely on template matching with pre-gathered patch-based features.
We introduce Multi-view Attention Regularizations (MARs) to constrain the channel and spatial attention across multiple feature views.
We demonstrate terrain-feature recognition performance improvements of up to 85%.
arXiv Detail & Related papers (2024-10-07T16:41:45Z)
- LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Multi-sensor for Autonomous Exploration [2.3011380360879237]
Environmental perception and navigation algorithms are the foundation for lunar rovers.
Most of the existing lunar datasets are targeted at a single task.
We propose a multi-task, multi-scene, and multi-label lunar benchmark dataset LuSNAR.
arXiv Detail & Related papers (2024-07-09T02:47:58Z)
- Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement [70.2429155741593]
This paper presents a new dataset and a general tracker enhancement method for Underwater Visual Object Tracking (UVOT).
UVOT poses distinct challenges: the underwater environment exhibits non-uniform lighting, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles.
We propose a novel underwater image enhancement algorithm designed specifically to boost tracking quality.
The method yields a significant performance improvement of up to 5.0% AUC for state-of-the-art (SOTA) visual trackers.
arXiv Detail & Related papers (2023-08-30T07:41:26Z)
- Large Scale Real-World Multi-Person Tracking [68.27438015329807]
This paper presents a new large-scale multi-person tracking dataset, PersonPath22.
It is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20.
arXiv Detail & Related papers (2022-11-03T23:03:13Z)
- Scalable semi-supervised dimensionality reduction with GPU-accelerated EmbedSOM [0.0]
BlosSOM is a high-performance semi-supervised dimensionality reduction software for interactive user-steerable visualization of high-dimensional datasets.
We show the application of BlosSOM on realistic datasets, where it helps to produce high-quality visualizations that incorporate user-specified layout and focus on certain features.
arXiv Detail & Related papers (2022-01-03T15:06:22Z)
- A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- Visual Tracking by TridentAlign and Context Embedding [71.60159881028432]
We propose novel TridentAlign and context embedding modules for Siamese network-based visual tracking methods.
The performance of the proposed tracker is comparable to that of state-of-the-art trackers, while the proposed tracker runs at real-time speed.
arXiv Detail & Related papers (2020-07-14T08:00:26Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.