A Pipeline for Vision-Based On-Orbit Proximity Operations Using Deep
Learning and Synthetic Imagery
- URL: http://arxiv.org/abs/2101.05661v1
- Date: Thu, 14 Jan 2021 15:17:54 GMT
- Title: A Pipeline for Vision-Based On-Orbit Proximity Operations Using Deep
Learning and Synthetic Imagery
- Authors: Carson Schubert, Kevin Black, Daniel Fonseka, Abhimanyu Dhir, Jacob
Deutsch, Nihal Dhamani, Gavin Martin, Maruthi Akella
- Abstract summary: Two key challenges currently pose a major barrier to the use of deep learning for vision-based on-orbit proximity operations.
A scarcity of labeled training data (images of a target spacecraft) hinders creation of robust deep learning models.
This paper presents an open-source deep learning pipeline, developed specifically for on-orbit visual navigation applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has become the gold standard for image processing over the past
decade. Simultaneously, we have seen growing interest in orbital activities
such as satellite servicing and debris removal that depend on proximity
operations between spacecraft. However, two key challenges currently pose a
major barrier to the use of deep learning for vision-based on-orbit proximity
operations. Firstly, efficient implementation of these techniques relies on an
effective system for model development that streamlines data curation,
training, and evaluation. Secondly, a scarcity of labeled training data (images
of a target spacecraft) hinders creation of robust deep learning models. This
paper presents an open-source deep learning pipeline, developed specifically
for on-orbit visual navigation applications, that addresses these challenges.
The core of our work consists of two custom software tools built on top of a
cloud architecture that interconnects all stages of the model development
process. The first tool leverages Blender, an open-source 3D graphics toolset,
to generate labeled synthetic training data with configurable model poses
(positions and orientations), lighting conditions, backgrounds, and commonly
observed in-space image aberrations. The second tool is a plugin-based
framework for effective dataset curation and model training; it provides common
functionality like metadata generation and remote storage access to all
projects while giving complete independence to project-specific code.
Time-consuming, graphics-intensive processes such as synthetic image generation
and model training run on cloud-based computational resources which scale to
any scope and budget and allow development of even the largest datasets and
models from any machine. The presented system has been used in the Texas
Spacecraft Laboratory with marked benefits in development speed and quality.
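The abstract describes the first tool as generating labeled synthetic images with randomized poses, lighting, and backgrounds. A minimal sketch of what such label metadata generation might look like is below; the record fields, function names, and parameter ranges are illustrative assumptions, not the pipeline's actual format (the real tool drives Blender to produce the renders themselves).

```python
import json
import math
import random

def random_unit_quaternion(rng):
    # Sample a uniformly distributed rotation (Shoemake's method).
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    a, b = math.sqrt(1.0 - u1), math.sqrt(u1)
    return [a * math.sin(2 * math.pi * u2), a * math.cos(2 * math.pi * u2),
            b * math.sin(2 * math.pi * u3), b * math.cos(2 * math.pi * u3)]

def generate_pose_metadata(n_images, max_range_m=50.0, seed=0):
    """Produce hypothetical pose/lighting labels for n_images synthetic renders."""
    rng = random.Random(seed)
    records = []
    for i in range(n_images):
        records.append({
            "image": f"render_{i:06d}.png",
            # Target position relative to the camera, in meters.
            "position_m": [rng.uniform(-max_range_m, max_range_m) for _ in range(3)],
            # Target attitude as a unit quaternion.
            "quaternion": random_unit_quaternion(rng),
            # Randomized lighting and background, as the abstract describes.
            "sun_elevation_deg": rng.uniform(-90.0, 90.0),
            "background": rng.choice(["earth", "stars", "black"]),
        })
    return records

metadata = generate_pose_metadata(3)
print(json.dumps(metadata[0], indent=2))
```

Pairing each rendered frame with a machine-readable record like this is what makes the synthetic data "labeled": pose-estimation models can train directly against the ground-truth position and quaternion without any manual annotation.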
Related papers
- Synthetica: Large Scale Synthetic Data for Robot Perception [21.415878105900187]
We present Synthetica, a method for large-scale synthetic data generation for training robust state estimators.
This paper focuses on the task of object detection, an important problem which can serve as the front-end for most state estimation problems.
We leverage data from a ray-tracing renderer, generating 2.7 million images, to train highly accurate real-time detection transformers.
We demonstrate state-of-the-art performance on the task of object detection, with detectors that run at 50-100 Hz, 9 times faster than the prior SOTA.
arXiv Detail & Related papers (2024-10-28T15:50:56Z)
- Training Datasets Generation for Machine Learning: Application to Vision Based Navigation [0.0]
Vision Based Navigation consists of utilizing cameras as precision sensors for GNC after extracting information from images.
To enable the adoption of machine learning for space applications, one of the obstacles is the demonstration that available training datasets are adequate to validate the algorithms.
The objective of the study is to generate datasets of images and metadata suitable for training machine learning algorithms.
arXiv Detail & Related papers (2024-09-17T17:34:24Z)
- DINOv2: Learning Robust Visual Features without Supervision [75.42921276202522]
This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
arXiv Detail & Related papers (2023-04-14T15:12:19Z)
- Design of Convolutional Extreme Learning Machines for Vision-Based Navigation Around Small Bodies [0.0]
Deep learning architectures such as convolutional neural networks are the standard in computer vision for image processing tasks.
Their accuracy however often comes at the cost of long and computationally expensive training.
A different method known as convolutional extreme learning machine has shown the potential to perform equally with a dramatic decrease in training time.
arXiv Detail & Related papers (2022-10-28T16:24:21Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and scripting-language engines alone do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using such frameworks and engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- On Deep Learning Techniques to Boost Monocular Depth Estimation for Autonomous Navigation [1.9007546108571112]
Inferring the depth of images is a fundamental inverse problem within the field of Computer Vision.
We propose a new lightweight and fast supervised CNN architecture combined with novel feature extraction models.
We also introduce an efficient surface normals module, jointly with a simple geometric 2.5D loss function, to solve single image depth estimation (SIDE) problems.
arXiv Detail & Related papers (2020-10-13T18:37:38Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.