Real-Time, Flight-Ready, Non-Cooperative Spacecraft Pose Estimation
Using Monocular Imagery
- URL: http://arxiv.org/abs/2101.09553v1
- Date: Sat, 23 Jan 2021 18:40:08 GMT
- Title: Real-Time, Flight-Ready, Non-Cooperative Spacecraft Pose Estimation
Using Monocular Imagery
- Authors: Kevin Black, Shrivu Shankar, Daniel Fonseka, Jacob Deutsch, Abhimanyu
Dhir, and Maruthi R. Akella
- Abstract summary: This work presents a novel convolutional neural network (CNN)-based monocular pose estimation system.
It achieves state-of-the-art accuracy with low computational demand.
The system achieves real-time performance on low-power flight-like hardware.
- Score: 1.1083289076967897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key requirement for autonomous on-orbit proximity operations is the
estimation of a target spacecraft's relative pose (position and orientation).
It is desirable to employ monocular cameras for this problem due to their low
cost, weight, and power requirements. This work presents a novel convolutional
neural network (CNN)-based monocular pose estimation system that achieves
state-of-the-art accuracy with low computational demand. In combination with a
Blender-based synthetic data generation scheme, the system demonstrates the
ability to generalize from purely synthetic training data to real in-space
imagery of the Northrop Grumman Enhanced Cygnus spacecraft. Additionally, the
system achieves real-time performance on low-power flight-like hardware.
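The abstract above describes a CNN that maps a single monocular image directly to the target's relative position and orientation. The snippet below is a minimal, hedged sketch of that general idea, not the authors' architecture: a small convolutional backbone feeding separate regression heads for a translation vector and a unit quaternion. The layer sizes, input resolution, and class/function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code): a lightweight CNN that
# regresses a relative pose from one monocular image as a translation vector
# plus a unit quaternion. All layer sizes and the input size are assumptions.
import torch
import torch.nn as nn


class MonocularPoseNet(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Small convolutional backbone, chosen to keep computational demand low.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Separate heads for position (3-vector) and orientation (quaternion).
        self.translation_head = nn.Linear(64, 3)
        self.rotation_head = nn.Linear(64, 4)

    def forward(self, image: torch.Tensor):
        features = self.backbone(image).flatten(1)
        t = self.translation_head(features)
        q = self.rotation_head(features)
        # Normalize so the quaternion represents a valid rotation.
        q = q / q.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return t, q


if __name__ == "__main__":
    net = MonocularPoseNet()
    dummy = torch.randn(1, 1, 256, 256)  # one grayscale frame (assumed resolution)
    translation, quaternion = net(dummy)
    print(translation.shape, quaternion.shape)  # torch.Size([1, 3]) torch.Size([1, 4])
```

A compact backbone of this kind is what makes real-time inference on low-power, flight-like hardware plausible, although the paper's actual network and output parameterization may differ.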
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of IoVT systems by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Bridging Domain Gap for Flight-Ready Spaceborne Vision [4.14360329494344]
This work presents Spacecraft Pose Network v3 (SPNv3), a Neural Network (NN) for monocular pose estimation of a known, non-cooperative target spacecraft.
SPNv3 is designed and trained to be computationally efficient while providing robustness to spaceborne images that have not been observed during offline training and validation on the ground.
Experiments demonstrate that the final SPNv3 can achieve state-of-the-art pose accuracy on hardware-in-the-loop images from a robotic testbed while having trained exclusively on computer-generated synthetic images.
arXiv Detail & Related papers (2024-09-18T02:56:50Z)
- SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit [8.671030148920009]
Spacecraft pose estimation plays a vital role in many on-orbit space missions, such as rendezvous and docking, debris removal, and on-orbit maintenance.
We analyze the radar image characteristics of spacecraft on orbit, then propose a new deep learning network structure named Dense Residual U-shaped Network (DR-U-Net) to extract image features.
We further introduce a novel neural network based on DR-U-Net, namely Spacecraft U-shaped Network (SU-Net) to achieve end-to-end pose estimation for non-cooperative spacecraft.
arXiv Detail & Related papers (2023-02-21T11:14:01Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software that exploits neural network architectures.
We show how low-power machine learning accelerators could enable the exploitation of Artificial Intelligence in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- UltraPose: Synthesizing Dense Pose with 1 Billion Points by Human-body Decoupling 3D Model [58.70130563417079]
We introduce a new 3D human-body model with a series of decoupled parameters that could freely control the generation of the body.
Compared to the existing manually annotated DensePose-COCO dataset, the synthetic UltraPose has ultra dense image-to-surface correspondences without annotation cost and error.
arXiv Detail & Related papers (2021-10-28T16:24:55Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- Federated Learning in the Sky: Aerial-Ground Air Quality Sensing Framework with UAV Swarms [53.38353133198842]
Air quality significantly affects human health, so it is increasingly important to predict the Air Quality Index (AQI) accurately and in a timely manner.
This paper proposes a new federated learning-based aerial-ground air quality sensing framework for fine-grained 3D air quality monitoring and forecasting.
For ground sensing systems, we propose a Graph Convolutional neural network-based Long Short-Term Memory (GC-LSTM) model to achieve accurate, real-time and future AQI inference.
arXiv Detail & Related papers (2020-07-23T13:32:47Z)
- Robust On-Manifold Optimization for Uncooperative Space Relative Navigation with a Single Camera [4.129225533930966]
An innovative model-based approach is demonstrated to estimate the six-dimensional pose of a target object relative to the chaser spacecraft using solely a monocular setup.
It is validated on realistic synthetic and laboratory datasets of a rendezvous trajectory with the complex spacecraft Envisat.
arXiv Detail & Related papers (2020-05-14T16:23:04Z)
- Assistive Relative Pose Estimation for On-orbit Assembly using Convolutional Neural Networks [0.0]
In this paper, a convolutional neural network is leveraged to determine the translation and rotation of an object of interest relative to the camera.
The simulation framework designed for the assembly task is used to generate a dataset for training the modified CNN models.
It is shown that the model performs comparably to current feature-selection methods and can therefore be used in conjunction with them to provide more reliable estimates (a sketch of a combined pose loss follows this entry).
arXiv Detail & Related papers (2020-01-29T02:53:52Z)
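For CNN-based relative pose regression, as in the main paper and the entry above, training typically combines a translation error with a rotation error. The sketch below is an illustrative assumption rather than the cited papers' loss: it pairs an L2 translation term with a geodesic quaternion angle, and the weighting factor `beta` and function name are hypothetical.

```python
# Hedged sketch of a combined pose loss for CNN-based relative pose regression
# (illustrative only; the geodesic quaternion term and the weighting are
# assumptions, not taken from the cited papers).
import torch


def pose_loss(t_pred, q_pred, t_true, q_true, beta: float = 1.0):
    """Translation error (L2) plus rotation error (angle between quaternions)."""
    t_err = torch.norm(t_pred - t_true, dim=-1)
    # Normalize both quaternions, then measure the angle between the rotations.
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    q_true = q_true / q_true.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    dot = (q_pred * q_true).sum(dim=-1).abs().clamp(max=1.0)  # abs() handles the double cover
    rot_err = 2.0 * torch.acos(dot)  # geodesic angle in radians
    return (t_err + beta * rot_err).mean()


if __name__ == "__main__":
    t_pred, t_true = torch.randn(4, 3), torch.randn(4, 3)
    q_pred, q_true = torch.randn(4, 4), torch.randn(4, 4)
    print(float(pose_loss(t_pred, q_pred, t_true, q_true)))
```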
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.