LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural
Network
- URL: http://arxiv.org/abs/2104.09248v1
- Date: Mon, 19 Apr 2021 12:46:05 GMT
- Title: LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural
Network
- Authors: Albert Garcia, Mohamed Adel Musallam, Vincent Gaudilliere, Enjie
Ghorbel, Kassem Al Ismaeil, Marcos Perez, Djamila Aouada
- Abstract summary: This work explores a novel methodology, using Convolutional Neural Networks (CNNs), for estimating the pose of uncooperative spacecraft.
Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information.
The experiments performed show that this work competes with the state-of-the-art in uncooperative spacecraft pose estimation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Being capable of estimating the pose of uncooperative objects in space has
been proposed as a key asset for enabling safe close-proximity operations such
as space rendezvous, in-orbit servicing and active debris removal. Usual
approaches for pose estimation involve classical computer vision-based
solutions or the application of Deep Learning (DL) techniques. This work
explores a novel DL-based methodology, using Convolutional Neural Networks
(CNNs), for estimating the pose of uncooperative spacecraft. Contrary to other
approaches, the proposed CNN directly regresses poses without needing any prior
3D information. Moreover, bounding boxes of the spacecraft in the image are
predicted in a simple, yet efficient manner. The experiments performed show that
this work competes with the state-of-the-art in uncooperative spacecraft pose
estimation, including works which require 3D information as well as works which
predict bounding boxes through sophisticated CNNs.
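The abstract describes a network that directly regresses the pose without prior 3D information. A minimal NumPy sketch of what such a direct-regression output head could look like, assuming a 7-dimensional output (a 3D translation plus a raw quaternion that must be normalized); all names, dimensions, and units here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch of a direct pose-regression head (an assumption for
# illustration, not LSPnet's actual architecture): CNN features are mapped
# linearly to a 7-vector holding translation and a raw quaternion.

def pose_head(features: np.ndarray, w: np.ndarray, b: np.ndarray):
    """Linear head: features -> (translation, unit quaternion)."""
    raw = features @ w + b          # shape (7,)
    t = raw[:3]                     # translation components (units assumed)
    q = raw[3:] / np.linalg.norm(raw[3:])  # normalize to a valid rotation
    return t, q

def rotation_error_deg(q_pred: np.ndarray, q_true: np.ndarray) -> float:
    """Geodesic angle between two unit quaternions, in degrees."""
    dot = abs(float(np.dot(q_pred, q_true)))  # abs handles the q ~ -q ambiguity
    return float(np.degrees(2.0 * np.arccos(np.clip(dot, 0.0, 1.0))))

# Toy usage with random weights standing in for a trained CNN backbone.
rng = np.random.default_rng(0)
feat = rng.normal(size=128)
w = rng.normal(size=(128, 7)) * 0.01
b = np.zeros(7)
t, q = pose_head(feat, w, b)
```

The quaternion normalization and the sign-invariant geodesic error are the standard ingredients of direct rotation regression; the benchmark metrics in this literature (e.g. the SPEED score) combine such a rotation error with a normalized translation error.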
Related papers
- Self Supervised Networks for Learning Latent Space Representations of Human Body Scans and Motions [6.165163123577484]
This paper introduces self-supervised neural network models to tackle several fundamental problems in the field of 3D human body analysis and processing.
First, we propose VariShaPE, a novel architecture for the retrieval of latent space representations of body shapes and poses.
Second, we complement the estimation of latent codes with MoGeN, a framework that learns the geometry of the latent space itself.
arXiv Detail & Related papers (2024-11-05T19:59:40Z)
- Leveraging Neural Radiance Fields for Pose Estimation of an Unknown Space Object during Proximity Operations [14.624172952608653]
We present a novel method that enables an "off-the-shelf" spacecraft pose estimator to be applied on an unknown target.
We train the NeRF model using a sparse collection of images that depict the target, and in turn generate a large dataset that is diverse both in terms of viewpoint and illumination.
We demonstrate that our method successfully enables the training of an off-the-shelf spacecraft pose estimation network from a sparse set of images.
arXiv Detail & Related papers (2024-05-21T12:34:03Z)
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
- A Survey on Deep Learning-Based Monocular Spacecraft Pose Estimation: Current State, Limitations and Prospects [7.08026800833095]
Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling vision-based systems in orbit.
Following the general trend in computer vision, more and more works have been focusing on leveraging Deep Learning (DL) methods to address this problem.
Despite promising research-stage results, major challenges still prevent the use of such methods in real-life missions.
arXiv Detail & Related papers (2023-05-12T09:52:53Z)
- Geometric-aware Pretraining for Vision-centric 3D Object Detection [77.7979088689944]
We propose a novel geometric-aware pretraining framework called GAPretrain.
GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors.
We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively.
arXiv Detail & Related papers (2023-04-06T14:33:05Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper, we propose pose estimation software based on neural network architectures.
We show how low-power machine learning accelerators could enable the use of Artificial Intelligence in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery [70.66865453410958]
Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques.
We propose a novel framework that relies only on silhouette supervision to adapt a source-trained model-based regressor.
We develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette.
arXiv Detail & Related papers (2022-04-04T06:58:15Z)
- Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on DQN algorithm, named as DRLAVT.
It can guide a chaser spacecraft to approach an arbitrary non-cooperative space target relying only on color or RGB-D images.
It significantly outperforms a position-based visual servoing baseline algorithm that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z)
- SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations [85.38562724999898]
We propose a 2D Image and 3D Point cloud Unsupervised pre-training strategy, called SimIPU.
Specifically, we develop a multi-modal contrastive learning framework that consists of an intra-modal spatial perception module and an inter-modal feature interaction module.
To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets.
arXiv Detail & Related papers (2021-12-09T03:27:00Z)
- MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation [4.202461384355329]
We propose MonoRUn, a novel 3D object detection framework that learns dense correspondences and geometry in a self-supervised manner.
Our proposed approach outperforms current state-of-the-art methods on KITTI benchmark.
arXiv Detail & Related papers (2021-03-23T15:03:08Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.