Spacecraft Pose Estimation Based on Unsupervised Domain Adaptation and on a 3D-Guided Loss Combination
- URL: http://arxiv.org/abs/2212.13415v1
- Date: Tue, 27 Dec 2022 08:57:46 GMT
- Title: Spacecraft Pose Estimation Based on Unsupervised Domain Adaptation and on a 3D-Guided Loss Combination
- Authors: Juan Ignacio Bravo Pérez-Villar, Álvaro García-Martín, Jesús Bescós
- Abstract summary: Spacecraft pose estimation is a key task to enable space missions in which two spacecraft must navigate around each other.
Current state-of-the-art algorithms for pose estimation employ data-driven techniques.
There is an absence of real training data for spacecraft imaged in space conditions due to the costs and difficulties associated with the space environment.
This has motivated the introduction of 3D data simulators, solving the issue of data availability but introducing a large gap between the training (source) and test (target) domains.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spacecraft pose estimation is a key task to enable space missions in which
two spacecraft must navigate around each other. Current state-of-the-art
algorithms for pose estimation employ data-driven techniques. However, there is
an absence of real training data for spacecraft imaged in space conditions due
to the costs and difficulties associated with the space environment. This has
motivated the introduction of 3D data simulators, solving the issue of data
availability but introducing a large gap between the training (source) and test
(target) domains. We explore a method that incorporates 3D structure into the
spacecraft pose estimation pipeline to provide robustness to intensity domain
shift and we present an algorithm for unsupervised domain adaptation with
robust pseudo-labelling. Our solution has ranked second in the two categories
of the 2021 Pose Estimation Challenge organised by the European Space Agency
and Stanford University, achieving the lowest average error over the two
categories.
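The robust pseudo-labelling step can be pictured as a confidence-gated selection over target-domain predictions: only predictions the model is sufficiently sure about are recycled as training labels. The sketch below is a minimal, hypothetical illustration of that idea; the function name, threshold, and data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def select_pseudo_labels(predictions, confidences, threshold=0.8):
    """Keep only target-domain predictions whose confidence exceeds a
    threshold -- a common robust pseudo-labelling heuristic (illustrative
    only; the paper's actual selection criterion may differ)."""
    keep = confidences >= threshold
    return predictions[keep], np.flatnonzero(keep)

# Example: 5 target-domain pose predictions with model confidences.
preds = np.arange(5)                            # placeholder pose predictions
confs = np.array([0.95, 0.40, 0.85, 0.10, 0.99])
kept, idx = select_pseudo_labels(preds, confs)
print(idx.tolist())  # indices of samples kept for retraining -> [0, 2, 4]
```

The retained samples would then be mixed back into the training set for the next self-training round.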
Related papers
- Robust 3D Semantic Occupancy Prediction with Calibration-free Spatial Transformation [32.50849425431012]
For autonomous cars equipped with multi-camera and LiDAR, it is critical to aggregate multi-sensor information into a unified 3D space for accurate and robust predictions.
Recent methods are mainly built on the 2D-to-3D transformation that relies on sensor calibration to project the 2D image information into the 3D space.
In this work, we propose a calibration-free spatial transformation based on vanilla attention to implicitly model the spatial correspondence.
arXiv Detail & Related papers (2024-11-19T02:40:42Z)
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- Syn-to-Real Unsupervised Domain Adaptation for Indoor 3D Object Detection [50.448520056844885]
We propose a novel framework for syn-to-real unsupervised domain adaptation in indoor 3D object detection.
Our adaptation results from synthetic dataset 3D-FRONT to real-world datasets ScanNetV2 and SUN RGB-D demonstrate remarkable mAP25 improvements of 9.7% and 9.1% over Source-Only baselines.
arXiv Detail & Related papers (2024-06-17T08:18:41Z)
- SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities [59.39858959066982]
Understanding and reasoning about spatial relationships is a fundamental capability for Visual Question Answering (VQA) and robotics.
We develop an automatic 3D spatial VQA data generation framework that scales up to 2 billion VQA examples on 10 million real-world images.
By training a VLM on such data, we significantly enhance its ability on both qualitative and quantitative spatial VQA.
arXiv Detail & Related papers (2024-01-22T18:01:01Z)
- Auxiliary Tasks Benefit 3D Skeleton-based Human Motion Prediction [106.06256351200068]
This paper introduces a model learning framework with auxiliary tasks.
In our auxiliary tasks, partial body joints' coordinates are corrupted by either masking or adding noise.
We propose a novel auxiliary-adapted transformer, which can handle incomplete, corrupted motion data.
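The corruption described in this auxiliary task (masking or adding noise to some joints' coordinates) can be sketched in a few lines. This is a generic, hypothetical version of that idea, not the paper's implementation; the mask ratio, noise level, and skeleton size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_joints(joints, mask_ratio=0.2, noise_std=0.05):
    """Corrupt a (num_joints, 3) array of joint coordinates by adding
    Gaussian noise, then zeroing (masking) a random subset of joints --
    an illustrative stand-in for the auxiliary-task corruption above."""
    corrupted = joints + rng.normal(0.0, noise_std, joints.shape)
    mask = rng.random(len(joints)) < mask_ratio
    corrupted[mask] = 0.0            # masked joints carry no coordinates
    return corrupted, mask

joints = rng.random((17, 3))         # e.g. a toy 17-joint skeleton
noisy, mask = corrupt_joints(joints)
```

A model trained to recover the clean coordinates from such corrupted input learns to exploit spatial and temporal dependencies among joints.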
arXiv Detail & Related papers (2023-08-17T12:26:11Z)
- A Survey on Deep Learning-Based Monocular Spacecraft Pose Estimation: Current State, Limitations and Prospects [7.08026800833095]
Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling vision-based systems in orbit.
Following the general trend in computer vision, more and more works have been focusing on leveraging Deep Learning (DL) methods to address this problem.
Despite promising research-stage results, major challenges preventing the use of such methods in real-life missions still stand in the way.
arXiv Detail & Related papers (2023-05-12T09:52:53Z)
- Bridging the Domain Gap in Satellite Pose Estimation: a Self-Training Approach based on Geometrical Constraints [44.15764885297801]
We propose a self-training framework based on the domain-agnostic geometrical constraints.
Specifically, we train a neural network to predict the 2D keypoints of a satellite and then use them to estimate the pose.
Experimental results show that our method adapts well to the target domain.
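A domain-agnostic geometrical constraint of the kind this pipeline relies on is the reprojection error: the satellite's known 3D keypoints, projected under a candidate pose, should land on the detected 2D keypoints. The sketch below illustrates that check with toy intrinsics and a synthetic pose; it is an assumed, simplified formulation, not the paper's code.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of (N, 3) model points under pose (R, t)."""
    cam = R @ pts3d.T + t[:, None]   # model points in the camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T        # normalise to pixel coordinates

def reprojection_error(K, R, t, pts3d, kps2d):
    """Mean pixel distance between detected 2D keypoints and the
    projection of the known 3D keypoints under a candidate pose."""
    return float(np.linalg.norm(project(K, R, t, pts3d) - kps2d, axis=1).mean())

K = np.array([[800.0, 0.0, 320.0],   # toy camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 0.0],   # four toy model keypoints (metres)
                  [0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.0],
                  [0.0, 0.0, 0.5]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
kps2d = project(K, R, t, pts3d)      # synthetic "detections" from the true pose
err = reprojection_error(K, R, t, pts3d, kps2d)
```

A self-training loop can use a small reprojection error as evidence that a predicted pose is consistent enough to serve as a pseudo-label in the target domain.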
arXiv Detail & Related papers (2022-12-23T01:47:36Z)
- Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery [70.66865453410958]
Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques.
We propose a novel framework that relies only on silhouette supervision to adapt a source-trained model-based regressor.
We develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette.
arXiv Detail & Related papers (2022-04-04T06:58:15Z)
- LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural Network [10.6872574091924]
This work explores a novel methodology that uses Convolutional Neural Networks (CNNs) to estimate the pose of uncooperative spacecraft.
Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information.
The performed experiments show how this work competes with the state-of-the-art in uncooperative spacecraft pose estimation.
arXiv Detail & Related papers (2021-04-19T12:46:05Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth is estimated and a pseudo-LiDAR point cloud is computed from the depth estimates; then object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
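For context, the pseudo-LiDAR intermediate representation that such two-step pipelines materialise is simply a back-projection of the depth map through the camera intrinsics. The sketch below uses toy values (intrinsics and a constant-depth map are assumptions for illustration):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K):
    """Back-project an (H, W) depth map into an (H*W, 3) pseudo-LiDAR
    point cloud using pinhole intrinsics K -- the explicit intermediate
    representation that one-stage methods like PLUME aim to avoid."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]   # X = (u - cx) * Z / fx
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]   # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)

K = np.array([[500.0, 0.0, 2.0],
              [0.0, 500.0, 1.0],
              [0.0, 0.0, 1.0]])
depth = np.full((2, 4), 2.0)          # toy 2x4 depth map, 2 m everywhere
cloud = depth_to_pseudo_lidar(depth, K)
print(cloud.shape)  # (8, 3)
```

Unifying depth estimation and detection in one metric space removes this conversion from the inference path, which is where the reported speed-up comes from.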
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.