AstroSLAM: Autonomous Monocular Navigation in the Vicinity of a
Celestial Small Body -- Theory and Experiments
- URL: http://arxiv.org/abs/2212.00350v1
- Date: Thu, 1 Dec 2022 08:24:21 GMT
- Title: AstroSLAM: Autonomous Monocular Navigation in the Vicinity of a
Celestial Small Body -- Theory and Experiments
- Authors: Mehregan Dor, Travis Driver, Kenneth Getzandanner, Panagiotis Tsiotras
- Abstract summary: We propose a vision-based solution for autonomous online navigation around an unknown target small celestial body.
AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine.
We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body.
- Score: 13.14201332737947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose AstroSLAM, a standalone vision-based solution for autonomous
online navigation around an unknown target small celestial body. AstroSLAM is
predicated on the formulation of the SLAM problem as an incrementally growing
factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine.
By combining sensor fusion with orbital motion priors, we achieve improved
performance over a baseline SLAM solution. We incorporate orbital motion
constraints into the factor graph by devising a novel relative dynamics factor,
which links the relative pose of the spacecraft to the problem of predicting
trajectories stemming from the motion of the spacecraft in the vicinity of the
small body. We demonstrate the excellent performance of AstroSLAM using both
real legacy mission imagery and trajectory data courtesy of NASA's Planetary
Data System, as well as real in-lab imagery data generated on a
3-degree-of-freedom spacecraft simulator test-bed.
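The abstract's core idea, an incrementally growing factor graph that fuses dynamics-based factors with measurements, can be illustrated with a toy sketch. The snippet below is not AstroSLAM or GTSAM: it is a hypothetical 1-D analogue in plain NumPy, where `TinyFactorGraph`, its factor weights, and the linear dynamics x_{k+1} = a·x_k + b are all invented for illustration. A real relative dynamics factor encodes nonlinear orbital motion, and iSAM2 updates the solution incrementally instead of re-solving from scratch.

```python
import numpy as np

class TinyFactorGraph:
    """Toy 1-D analogue of an incrementally growing factor graph.

    Each state x_k is a scalar "relative position". Factors are linear:
      prior:     x_0 = p
      dynamics:  x_{k+1} - a*x_k = b   (stand-in for a relative dynamics factor)
      measure:   x_k = z_k             (stand-in for a landmark observation)
    New factors are appended as measurements arrive, and the whole weighted
    least-squares problem is re-solved (iSAM2 instead updates incrementally).
    """

    def __init__(self, prior, a, b, sigma_dyn=0.1, sigma_meas=0.5):
        self.a, self.b = a, b
        self.w_dyn, self.w_meas = 1.0 / sigma_dyn, 1.0 / sigma_meas
        self.rows = []       # list of (coeffs, target, weight) factor rows
        self.n = 1           # number of states in the graph
        self._add_row({0: 1.0}, prior, 1.0)   # prior factor on x_0

    def _add_row(self, coeffs, target, weight):
        self.rows.append((dict(coeffs), target, weight))

    def add_measurement(self, z):
        """Grow the graph: one new state, one dynamics factor, one measurement."""
        k = self.n
        self._add_row({k - 1: -self.a, k: 1.0}, self.b, self.w_dyn)  # dynamics
        self._add_row({k: 1.0}, z, self.w_meas)                      # measurement
        self.n += 1

    def solve(self):
        """Assemble the weighted Jacobian and solve the linear least squares."""
        A = np.zeros((len(self.rows), self.n))
        r = np.zeros(len(self.rows))
        for i, (coeffs, target, w) in enumerate(self.rows):
            for j, c in coeffs.items():
                A[i, j] = w * c
            r[i] = w * target
        x, *_ = np.linalg.lstsq(A, r, rcond=None)
        return x
```

With noiseless measurements consistent with the dynamics, the solved trajectory matches the truth exactly, mirroring how a well-constrained factor graph recovers the state sequence.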
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion [13.69678622755871]
We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) pipeline for autonomous space robots.
Our method, a state-of-the-art factor graph optimization pipeline, extends Structure from Small Motion to robustly initialize a monocular agent in spacecraft inspection trajectories.
We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness.
arXiv Detail & Related papers (2024-09-24T21:33:14Z) - Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadows, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and a ground-plane assumption for cross-frame correspondence leads to a lightweight network with significantly improved speed and accuracy.
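The cross-frame correspondence from camera intrinsics and a ground-plane assumption reduces to the classical plane-induced homography H = K (R + t nᵀ/d) K⁻¹. The sketch below is a generic NumPy illustration of that geometry, not code from the HomoFusion paper; the conventions assumed here are that points on the plane satisfy nᵀX = d in frame 1 and that frame-2 coordinates are X₂ = R X₁ + t.

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping pixels of a world plane from frame 1 to frame 2.

    Convention: points on the plane satisfy n.T @ X = d in the frame-1
    camera coordinates, and frame-2 coordinates are X2 = R @ X1 + t.
    Then H = K (R + t n^T / d) K^{-1} maps homogeneous pixels x1 -> x2.
    """
    n = np.asarray(n, float).reshape(3, 1)
    t = np.asarray(t, float).reshape(3, 1)
    return K @ (R + (t @ n.T) / d) @ np.linalg.inv(K)

def project(K, X):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    x = K @ X
    return x[:2] / x[2]
```

For any point on the plane, warping its frame-1 pixel through H reproduces its frame-2 projection exactly, which is what lets temporally adjacent frames supply complementary cues for occluded road markings.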
arXiv Detail & Related papers (2024-04-11T10:26:40Z) - Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting [0.0]
We present an approach for mapping of satellites on orbit based on 3D Gaussian Splatting.
We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up.
Our model is shown to be capable of training on-board and rendering higher quality novel views of an unknown satellite nearly 2 orders of magnitude faster than previous NeRF-based algorithms.
arXiv Detail & Related papers (2024-01-05T00:49:56Z) - FedSN: A Federated Learning Framework over Heterogeneous LEO Satellite Networks [18.213174641216884]
A large number of Low Earth Orbit (LEO) satellites have been launched and deployed successfully in space by commercial companies, such as SpaceX.
Equipped with multimodal sensors, LEO satellites serve not only communication but also various machine learning applications, such as space modulation recognition and remote sensing image classification.
We propose FedSN, a general FL framework that tackles the heterogeneity challenges of LEO satellite networks and fully exploits the data diversity on LEO satellites.
arXiv Detail & Related papers (2023-11-02T14:47:06Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
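Event-based representations of this kind are typically derived from frame sequences by thresholding per-pixel log-intensity changes. The sketch below is a generic, idealized event-camera model in NumPy, not the paper's actual pipeline; the function name, threshold value, and event tuple layout are illustrative choices.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Minimal sketch of event generation from an intensity-frame sequence.

    An idealized event camera fires an event at a pixel whenever the
    log-intensity there changes by more than `threshold` since the last
    event at that pixel. Returns a list of (frame_index, row, col, polarity)
    tuples with polarity +1 (brighter) or -1 (darker).
    """
    ref = np.log(frames[0].astype(float) + eps)   # per-pixel reference level
    events = []
    for k, frame in enumerate(frames[1:], start=1):
        cur = np.log(frame.astype(float) + eps)
        diff = cur - ref
        for pol, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            rows, cols = np.nonzero(mask)
            events.extend((k, r, c, pol) for r, c in zip(rows, cols))
            ref[mask] = cur[mask]   # reset reference where events fired
    return events
```

Running this over rendered landing-trajectory image sequences would yield sparse positive and negative events concentrated on moving surface features, which is the kind of representation such a dataset pipeline produces.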
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft
Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art YOLOv5 object detector with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z) - Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z) - LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR
Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
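The idea of scoring scale-drift risk from a principal component analysis of the relative-translation information matrix can be sketched as follows. This is a hypothetical illustration, not the paper's estimator: `scale_drift_risk` and its eigenvalue-ratio score are invented for exposition. The intuition is that a near-zero smallest eigenvalue of the information matrix means the translation, and hence the scale, is weakly constrained along the corresponding direction.

```python
import numpy as np

def scale_drift_risk(info_t):
    """Illustrative risk score from a 3x3 relative-translation information matrix.

    Eigendecomposing the symmetric information matrix (a PCA of the
    constraint geometry) reveals directions along which the relative
    translation is weakly constrained. Returns (risk, direction), where
    risk = 1 - lambda_min / lambda_max lies in [0, 1] and direction is the
    weakest-constrained unit vector.
    """
    w, V = np.linalg.eigh(info_t)   # eigenvalues in ascending order
    risk = 1.0 - w[0] / w[-1]
    return risk, V[:, 0]
```

A well-conditioned information matrix yields a risk near zero, while a nearly rank-deficient one, e.g. from motion mostly along the optical axis, pushes the score toward one, flagging imminent scale drift.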
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Robust On-Manifold Optimization for Uncooperative Space Relative
Navigation with a Single Camera [4.129225533930966]
An innovative model-based approach is demonstrated for estimating the six-degree-of-freedom pose of a target object relative to the chaser spacecraft using only a monocular setup.
It is validated on realistic synthetic and laboratory datasets of a rendezvous trajectory with the complex spacecraft Envisat.
arXiv Detail & Related papers (2020-05-14T16:23:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.