Keypoint-based Stereophotoclinometry for Characterizing and Navigating
Small Bodies: A Factor Graph Approach
- URL: http://arxiv.org/abs/2312.06865v1
- Date: Mon, 11 Dec 2023 22:23:43 GMT
- Title: Keypoint-based Stereophotoclinometry for Characterizing and Navigating
Small Bodies: A Factor Graph Approach
- Authors: Travis Driver, Andrew Vaughan, Yang Cheng, Adnan Ansar, John
Christian, Panagiotis Tsiotras
- Abstract summary: This paper proposes the incorporation of techniques from stereophotoclinometry into a keypoint-based structure-from-motion system.
The proposed framework is validated on real imagery of the Cornelia crater on Asteroid 4 Vesta.
- Score: 15.863759076104104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes the incorporation of techniques from
stereophotoclinometry (SPC) into a keypoint-based structure-from-motion (SfM)
system to estimate the surface normal and albedo at detected landmarks to
improve autonomous surface and shape characterization of small celestial bodies
from in-situ imagery. In contrast to the current state-of-the-practice method
for small body shape reconstruction, i.e., SPC, which relies on
human-in-the-loop verification and high-fidelity a priori information to
achieve accurate results, we forego the expensive maplet estimation step and
instead leverage dense keypoint measurements and correspondences from an
autonomous keypoint detection and matching method based on deep learning to
provide the necessary photogrammetric constraints. Moreover, we develop a
factor graph-based approach allowing for simultaneous optimization of the
spacecraft's pose, landmark positions, Sun-relative direction, and surface
normals and albedos via fusion of Sun sensor measurements and image keypoint
measurements. The proposed framework is validated on real imagery of the
Cornelia crater on Asteroid 4 Vesta, along with pose estimation and mapping
comparison against an SPC reconstruction, where we demonstrate precise
alignment to the SPC solution without relying on any a priori camera pose or
topography information, or on humans in the loop.
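The abstract describes a factor graph that jointly estimates spacecraft pose, landmark positions, the Sun-relative direction, and per-landmark surface normals and albedos by fusing keypoint and Sun-sensor measurements. The sketch below is not the authors' code; it is a minimal illustration of the kinds of residuals such a graph could stack, assuming a pinhole camera and a simple Lambertian reflectance model, with all function names and values chosen for illustration. A batch solver (e.g., GTSAM or Ceres) would attach each residual as a factor and minimize them jointly.

```python
# Minimal sketch (illustrative only, not the paper's implementation) of three
# residual types a keypoint-based SPC factor graph could fuse.
import numpy as np

def reprojection_residual(K, T_cam_world, p_world, uv_obs):
    """Pixel error of a world-frame landmark projected into a pinhole camera.

    K           : 3x3 camera intrinsics
    T_cam_world : 4x4 world-to-camera rigid transform
    p_world     : 3-vector landmark position
    uv_obs      : 2-vector detected keypoint (e.g., from a learned matcher)
    """
    p_cam = T_cam_world[:3, :3] @ p_world + T_cam_world[:3, 3]
    uvw = K @ p_cam
    return uvw[:2] / uvw[2] - uv_obs

def sun_direction_residual(s_body, R_body_world, s_world):
    """Misalignment between the Sun-sensor reading (body frame) and the
    estimated world-frame Sun direction rotated into the body frame."""
    return R_body_world @ s_world - s_body

def photoclinometry_residual(I_obs, albedo, normal, s_world):
    """Lambertian intensity error at a landmark: observed brightness minus
    albedo times the cosine of the incidence angle (clamped at the terminator).
    The Lambertian model is an assumption made for this sketch."""
    cos_i = max(0.0, float(normal @ s_world))
    return np.array([I_obs - albedo * cos_i])

if __name__ == "__main__":
    # Toy evaluation with made-up numbers; a solver would iterate over many
    # images and landmarks, updating poses, points, normals, and albedos.
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)
    p = np.array([0.1, -0.2, 5.0])
    uv = np.array([330.0, 222.0])
    s = np.array([0.0, 0.0, 1.0])  # unit Sun direction (assumed)
    n = np.array([0.0, 0.0, 1.0])  # unit surface normal (assumed)
    print(reprojection_residual(K, T, p, uv))
    print(sun_direction_residual(s, np.eye(3), s))
    print(photoclinometry_residual(0.8, 0.9, n, s))
```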
Related papers
- Tightly-Coupled, Speed-aided Monocular Visual-Inertial Localization in Topological Map [0.7373617024876725]
This paper proposes a novel algorithm for vehicle speed-aided monocular visual-inertial localization using a topological map.
The proposed system aims to address the limitations of existing methods that rely heavily on expensive sensors like GPS and LiDAR.
arXiv Detail & Related papers (2024-11-08T11:55:27Z)
- Ground-based image deconvolution with Swin Transformer UNet [2.41675832913699]
We introduce a two-step deconvolution framework using a Swin Transformer architecture.
Our study reveals that the deep learning-based solution introduces a bias, constraining the scope of scientific analysis.
We propose a novel third step relying on the active coefficients in the sparsity wavelet framework.
arXiv Detail & Related papers (2024-05-13T15:30:41Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching [102.39635336450262]
We address the problem of ground-to-satellite image geo-localization by matching a query image captured at the ground level against a large-scale database with geotagged satellite images.
Our new method is able to achieve the fine-grained location of a query image, up to pixel size precision of the satellite image.
arXiv Detail & Related papers (2022-03-26T20:10:38Z)
- Continuous Self-Localization on Aerial Images Using Visual and Lidar Sensors [25.87104194833264]
We propose a novel method for geo-tracking in outdoor environments by registering a vehicle's sensor information with aerial imagery of an unseen target region.
We train a model in a metric learning setting to extract visual features from ground and aerial images.
Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos.
arXiv Detail & Related papers (2022-03-07T12:25:44Z)
- Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images [65.74185962364211]
We present a method for incorporating neighborhood information into distance-based dimensionality reduction methods.
Based on a classification of different methods for comparing image patches, we explore a number of different approaches.
arXiv Detail & Related papers (2022-02-18T13:17:43Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Spatial Attention Improves Iterative 6D Object Pose Estimation [52.365075652976735]
We propose a new method for 6D pose estimation refinement from RGB images.
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object.
We experimentally show that this approach learns to attend to salient spatial features and learns to ignore occluded parts of the object, leading to better pose estimation across datasets.
arXiv Detail & Related papers (2021-01-05T17:18:52Z)