Mixing Data-driven and Geometric Models for Satellite Docking Port State Estimation using an RGB or Event Camera
- URL: http://arxiv.org/abs/2409.15581v1
- Date: Mon, 23 Sep 2024 22:28:09 GMT
- Title: Mixing Data-driven and Geometric Models for Satellite Docking Port State Estimation using an RGB or Event Camera
- Authors: Cedric Le Gentil, Jack Naylor, Nuwan Munasinghe, Jasprabhjit Mehami, Benny Dai, Mikhail Asavkin, Donald G. Dansereau, Teresa Vidal-Calleja
- Abstract summary: This work focuses on satellite-agnostic operations using the recently released Lockheed Martin Mission Augmentation Port (LM-MAP) as the target.
We present a pipeline for automated satellite docking port detection and state estimation using monocular vision data from standard RGB sensing or an event camera.
- Score: 4.9788231201543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In-orbit automated servicing is a promising path towards lowering the cost of satellite operations and reducing the amount of orbital debris. For this purpose, we present a pipeline for automated satellite docking port detection and state estimation using monocular vision data from standard RGB sensing or an event camera. Rather than taking snapshots of the environment, an event camera has independent pixels that asynchronously respond to light changes, offering advantages such as high dynamic range, low power consumption, and low latency. This work focuses on satellite-agnostic operations (only a geometric knowledge of the actual port is required) using the recently released Lockheed Martin Mission Augmentation Port (LM-MAP) as the target. By leveraging shallow data-driven techniques to preprocess the incoming data to highlight the LM-MAP's reflective navigational aids and then using basic geometric models for state estimation, we present a lightweight and data-efficient pipeline that can be used independently with either RGB or event cameras. We demonstrate the soundness of the pipeline and perform a quantitative comparison of the two modalities based on data collected with a photometrically accurate test bench that includes a robotic arm to simulate the target satellite's uncontrolled motion.
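The abstract outlines rather than specifies the method, but the structure it describes is clear: accumulate the incoming data into an image, segment the bright reflective navigational aids, and fit the known port geometry to the detections. Below is a minimal sketch of that structure, assuming events arrive as a NumPy structured array with `x` and `y` fields; the marker layout, threshold, and the use of OpenCV's PnP solver are illustrative stand-ins for the paper's actual models:

```python
import numpy as np
import cv2

def events_to_frame(events, shape):
    """Accumulate asynchronous events into a 2D count image (event frame)."""
    frame = np.zeros(shape, dtype=np.float32)
    np.add.at(frame, (events["y"], events["x"]), 1.0)
    return frame

def detect_markers(frame, threshold=0.5):
    """Stand-in for the shallow data-driven preprocessing: normalize,
    threshold, and return the centroids of bright connected blobs."""
    mask = (frame / (frame.max() + 1e-9) > threshold).astype(np.uint8)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]  # drop the background component

# Hypothetical 3D layout (metres) of the LM-MAP's reflective aids;
# the real geometry would come from the port's published specification.
MARKER_LAYOUT = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]], np.float32)

def estimate_port_pose(image_points, K):
    """Fit the known port geometry to detected centroids with PnP.
    Data association between detections and layout points is assumed."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_LAYOUT,
                                  image_points.astype(np.float32), K, None)
    return (rvec, tvec) if ok else None
```

For RGB input, `events_to_frame` would simply be replaced by the raw intensity image, which is what lets the same geometric back end serve both modalities.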
Related papers
- Synthetic Lunar Terrain: A Multimodal Open Dataset for Training and Evaluating Neuromorphic Vision Algorithms [18.85150427551313]
Synthetic Lunar Terrain (SLT) is an open dataset collected from an analogue test site for lunar missions.
It includes several side-by-side captures from event-based and conventional RGB cameras.
The event-stream recorded from the neuromorphic vision sensor of the event-based camera is of particular interest.
arXiv Detail & Related papers (2024-08-30T02:14:33Z)
- Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images [5.234109158596138]
Deep learning models deployed onboard satellites enable real-time interpretation of remote sensing images.
This paper proposes a method based on parameter-efficient fine-tuning technology with low-rank adaptation (LoRA) module.
By fine-tuning and updating only 12.4% of the model's total parameters, it is able to achieve 97% to 100% of the performance of full fine-tuning models.
arXiv Detail & Related papers (2024-06-04T15:00:49Z)
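The LoRA technique referenced above trains only a low-rank additive update to each frozen weight matrix, which is how such a small parameter fraction can recover near-full fine-tuning performance. A minimal PyTorch sketch of the idea; the rank and scaling are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + scale * B (A x); only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```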
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
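The sampling-time issue noted above is inherent to diffusion models: generating a single image requires hundreds to thousands of sequential denoising passes through the network. A schematic DDPM-style reverse loop showing why (the `model` and the `betas` noise schedule are placeholders):

```python
import torch

@torch.no_grad()
def ddpm_sample(model, shape, betas):
    """Each of the T steps needs a full network forward pass, which is
    what makes sampling slow compared to single-pass generators."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = model(x, t)  # network predicts the noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```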
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
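The snippet above does not define $\mathcal{L}_{EC}$, so any concrete form is a guess; contrastive losses of this kind are commonly InfoNCE-style objectives that pull embeddings of two augmented views of the same clip together. A generic sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic contrastive loss over two batches of embeddings in which
    z1[i] and z2[i] are views of the same event clip (positive pairs)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature      # pairwise cosine similarities
    targets = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```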
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
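Event simulation from rendered sequences typically rests on the event camera's contrast-threshold model: a pixel fires whenever its log intensity changes by more than a fixed threshold. A simplified per-frame-pair version of that principle (real simulators also interpolate between frames; the threshold value here is illustrative):

```python
import numpy as np

def events_from_frames(prev, curr, t, contrast=0.2, eps=1e-3):
    """Emit (x, y, t, polarity) wherever the log intensity changed by more
    than the contrast threshold between two consecutive float frames."""
    diff = np.log(curr + eps) - np.log(prev + eps)
    ys, xs = np.nonzero(np.abs(diff) >= contrast)
    pols = np.sign(diff[ys, xs]).astype(np.int8)
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, pols)]
```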
- Fast Trajectory End-Point Prediction with Event Cameras for Reactive Robot Control [4.110120522045467]
In this paper, we propose to exploit the low latency, motion-driven sampling, and data compression properties of event cameras to overcome these issues.
As a use-case, we use a Panda robotic arm to intercept a ball bouncing on a table.
We train the network in simulation to speed up the dataset acquisition and then fine-tune the models on real trajectories.
arXiv Detail & Related papers (2023-02-27T14:14:52Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
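One plausible reading of the approach above is to project lidar points of a known semantic class into the image and score how often they land on pixels the segmentation network assigns the same class, refining the extrinsics to maximize that agreement. A hedged sketch of the scoring step only; the refinement loop and multi-class handling are omitted:

```python
import numpy as np

def segmentation_agreement(points_lidar, T_cam_lidar, K, seg_mask):
    """Fraction of projected lidar points (all of one class) that fall on
    pixels of the matching binary segmentation mask; higher is better."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)   # perspective divide
    h, w = seg_mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[ok]
    return float(np.mean(seg_mask[uv[:, 1], uv[:, 0]])) if len(uv) else 0.0
```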
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
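The proposed miscalibration metric is not given in this snippet; a natural stand-in is the mean pixel displacement obtained when projecting the same 3D points with the nominal versus the actual intrinsics. A sketch under that assumption:

```python
import numpy as np

def avg_pixel_displacement(points_cam, K_nominal, K_actual):
    """Mean reprojection offset (pixels) between two intrinsic matrices for
    points in the camera frame (z > 0): a simple severity measure."""
    def project(K):
        uv = (K @ points_cam.T).T
        return uv[:, :2] / uv[:, 2:3]
    return float(np.mean(np.linalg.norm(
        project(K_nominal) - project(K_actual), axis=1)))
```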
- Exploiting Event Cameras for Spatio-Temporal Prediction of Fast-Changing Trajectories [7.13400854198045]
This paper investigates trajectory prediction for robotics, to improve the interaction of robots with moving targets.
We apply state-of-the-art machine learning, specifically Long Short-Term Memory (LSTM) architectures.
arXiv Detail & Related papers (2020-01-05T14:37:28Z)
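An LSTM formulation of this task would typically consume a window of past 2D positions and regress a future position. A minimal PyTorch sketch; the hidden size and single-step horizon are illustrative:

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode a window of past 2D positions and regress a future position."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past_xy):          # past_xy: (batch, T, 2)
        _, (h, _) = self.lstm(past_xy)
        return self.head(h[-1])          # predicted (x, y) at the horizon
```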
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.