High-accuracy Vision-Based Attitude Estimation System for Air-Bearing
Spacecraft Simulators
- URL: http://arxiv.org/abs/2312.08146v1
- Date: Wed, 13 Dec 2023 13:55:36 GMT
- Title: High-accuracy Vision-Based Attitude Estimation System for Air-Bearing
Spacecraft Simulators
- Authors: Fabio Ornati, Gianfranco Di Domenico, Paolo Panicucci, Francesco
Topputo
- Abstract summary: This paper shows a novel and versatile method to compute the attitude of rotational air-bearing platforms using a monocular camera and sets of fiducial markers.
Auto-calibration procedures to perform a preliminary estimation of the system parameters are shown.
Results show expected 1-sigma accuracies on the order of $\sim$ 12 arcsec and $\sim$ 37 arcsec for about- and cross-boresight rotations of the platform.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Air-bearing platforms for simulating the rotational dynamics of satellites
require highly precise ground truth systems. Unfortunately, commercial motion
capture systems used for this scope are complex and expensive. This paper shows
a novel and versatile method to compute the attitude of rotational air-bearing
platforms using a monocular camera and sets of fiducial markers. The work
proposes a geometry-based iterative algorithm that is significantly more
accurate than other literature methods that involve the solution of the
Perspective-n-Point problem. Additionally, auto-calibration procedures to
perform a preliminary estimation of the system parameters are shown. The
developed methodology is deployed onto a Raspberry Pi 4 micro-computer and
tested with a set of LED markers. Data obtained with this setup are compared
against computer simulations of the same system to understand and validate the
attitude estimation performances. Simulation results show expected 1-sigma
accuracies in the order of $\sim$ 12 arcsec and $\sim$ 37 arcsec for about- and
cross-boresight rotations of the platform, and average latency times of 6 ms.
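The paper's geometry-based iterative algorithm is not reproduced here, but the underlying task — recovering a platform rotation from a set of matched fiducial-marker positions — can be illustrated with the classical SVD solution to Wahba's problem (the Kabsch algorithm). This is a minimal sketch for intuition only, with hypothetical marker coordinates; it is a least-squares baseline, not the authors' method.

```python
# Sketch: attitude from matched 3D marker positions via Wahba's problem,
# solved with an SVD (Kabsch algorithm). Marker data are hypothetical.
import numpy as np

def estimate_attitude(body_pts, ref_pts):
    """Return rotation matrix R minimizing ||ref_pts - body_pts @ R.T||."""
    B = ref_pts.T @ body_pts                   # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))         # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Example: recover a known 30-degree boresight rotation from six markers.
rng = np.random.default_rng(0)
markers = rng.standard_normal((6, 3))          # marker positions, body frame
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
observed = markers @ R_true.T                  # same markers, reference frame
R_est = estimate_attitude(markers, observed)
print(np.allclose(R_est, R_true))
```

With noiseless observations the estimate is exact; in practice the camera measures noisy 2D projections, which is why the paper's iterative geometric refinement outperforms a one-shot closed-form solution.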
Related papers
- Fried Parameter Estimation from Single Wavefront Sensor Image with Artificial Neural Networks [0.9883562565157392]
Atmospheric turbulence degrades the quality of astronomical observations in ground-based telescopes, leading to distorted and blurry images.
Adaptive Optics (AO) systems are designed to counteract these effects, using atmospheric measurements captured by a wavefront sensor to make real-time corrections to the incoming wavefront.
The Fried parameter, r0, characterises the strength of atmospheric turbulence and is an essential control parameter for optimising the performance of AO systems.
We develop a novel data-driven approach, adapting machine learning methods from computer vision for Fried parameter estimation from a single Shack-Hartmann or pyramid wavefront sensor image.
arXiv Detail & Related papers (2025-04-23T18:16:07Z) - Accurate Pose Estimation for Flight Platforms based on Divergent Multi-Aperture Imaging System [8.98211713131741]
Vision-based pose estimation plays a crucial role in the autonomous navigation of flight platforms.
Field of view and spatial resolution of the camera limit pose estimation accuracy.
This paper designs a divergent multi-aperture imaging system to achieve simultaneous observation of a large field of view and high spatial resolution.
arXiv Detail & Related papers (2025-02-27T02:51:09Z) - Towards Real-Time 2D Mapping: Harnessing Drones, AI, and Computer Vision for Advanced Insights [0.0]
This paper presents an advanced mapping system that combines drone imagery with machine learning and computer vision to overcome challenges in speed, accuracy, and adaptability across diverse terrains.
The system produces seamless, high-resolution maps with minimal latency, offering strategic advantages in defense operations.
arXiv Detail & Related papers (2024-12-28T16:47:18Z) - ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras [33.81592783496106]
Event-based visual odometry aims at solving tracking and mapping sub-problems in parallel.
We build an event-based stereo visual-inertial odometry system on top of our previous direct pipeline Event-based Stereo Visual Odometry.
arXiv Detail & Related papers (2024-10-12T05:35:27Z) - Deep Learning for Inertial Sensor Alignment [1.9773109138840514]
We propose a data-driven approach to learn the yaw mounting angle of a smartphone equipped with an inertial measurement unit (IMU) and strapped to a car.
The proposed model uses only the accelerometer and gyroscope readings from an IMU as input.
The trained model is deployed on an Android device and evaluated in real-time to test the accuracy of the estimated yaw mounting angle.
arXiv Detail & Related papers (2022-12-10T07:50:29Z) - A Flexible-Frame-Rate Vision-Aided Inertial Object Tracking System for
Mobile Devices [3.4836209951879957]
We propose a flexible-frame-rate object pose estimation and tracking system for mobile devices.
Inertial measurement unit (IMU) pose propagation is performed on the client side for high speed tracking, and RGB image-based 3D pose estimation is performed on the server side.
Our system supports flexible frame rates up to 120 FPS and guarantees high precision and real-time tracking on low-end devices.
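The client-side IMU pose propagation mentioned above amounts to integrating gyroscope angular rates into an attitude estimate between image-based corrections. A minimal sketch of that step, using first-order quaternion integration with hypothetical rates and timestep (not the paper's implementation):

```python
# Sketch: propagating attitude from gyroscope rates via quaternion
# integration, as done between camera-based pose updates. Values are
# illustrative, not taken from the paper.
import numpy as np

def propagate(q, omega, dt):
    """One step of q_{k+1} = q_k * dq, with dq from the rotation omega*dt."""
    theta = np.asarray(omega, dtype=float) * dt   # small rotation vector
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = theta / angle
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    w1, x1, y1, z1 = q                            # Hamilton product q * dq
    w2, x2, y2, z2 = dq
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

# Rotate at 90 deg/s about z for 1 s, sampled at 120 Hz (the quoted FPS):
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(120):
    q = propagate(q, [0.0, 0.0, np.pi / 2], 1.0 / 120.0)
# q should now represent a 90-degree yaw rotation.
print(np.allclose(q, [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]))
```

Gyro integration drifts over time, which is why such systems periodically correct the propagated pose with the slower image-based estimate from the server.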
arXiv Detail & Related papers (2022-10-22T15:26:50Z) - Stable Object Reorientation using Contact Plane Registration [32.19425880216469]
We propose to overcome the critical issue of modelling multimodality in the space of rotations by using a conditional generative model.
Our system is capable of operating from noisy and partially-observed pointcloud observations captured by real world depth cameras.
arXiv Detail & Related papers (2022-08-18T17:10:28Z) - Satellite Image Time Series Analysis for Big Earth Observation Data [50.591267188664666]
This paper describes sits, an open-source R package for satellite image time series analysis using machine learning.
We show that this approach produces high accuracy for land use and land cover maps through a case study in the Cerrado biome.
arXiv Detail & Related papers (2022-04-24T15:23:25Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge
TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - Visual-tactile sensing for Real-time liquid Volume Estimation in
Grasping [58.50342759993186]
We propose a visuo-tactile model for realtime estimation of the liquid inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is well controlled and adjusted based on the estimation model in real time.
arXiv Detail & Related papers (2022-02-23T13:38:31Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Multi-scale Interaction for Real-time LiDAR Data Segmentation on an
Embedded Platform [62.91011959772665]
Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles.
Current approaches that operate directly on the point cloud use complex spatial aggregation operations.
We propose a projection-based method, called Multi-scale Interaction Network (MINet), which is very efficient and accurate.
arXiv Detail & Related papers (2020-08-20T19:06:11Z) - End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z) - Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless
Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with a structureless bundle adjustment, where the coarse-to-fine approach does not require an initial estimate of the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.