Structure from Motion-based Motion Estimation and 3D Reconstruction of Unknown Shaped Space Debris
- URL: http://arxiv.org/abs/2408.01035v1
- Date: Fri, 2 Aug 2024 06:18:39 GMT
- Title: Structure from Motion-based Motion Estimation and 3D Reconstruction of Unknown Shaped Space Debris
- Authors: Kentaro Uno, Takehiro Matsuoka, Akiyoshi Uchida, Kazuya Yoshida
- Abstract summary: This paper proposes a Structure from Motion-based algorithm for motion estimation of space debris of unknown shape with limited resources.
The method is validated with realistic image datasets generated by a microgravity experiment on a 2D air-floating testbed and by a 3D kinematic simulation.
- Score: 3.037387520023979
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the surge in spacecraft launches over recent decades, the space debris problem is becoming increasingly critical. For sustainable space utilization, the continuous removal of space debris is a pressing challenge. To maximize the reliability of an on-orbit debris capture mission, accurate motion estimation of the target is essential. Space debris has lost its attitude and orbit control capabilities, and its shape is unknown due to breakup. This paper proposes a Structure from Motion (SfM)-based algorithm to estimate the motion of space debris of unknown shape with limited resources, where only 2D images are required as input. The method simultaneously outputs the reconstructed shape of the unknown object and the relative pose trajectory between the target and the camera, which are exploited to estimate the target's motion. The method is quantitatively validated with realistic image datasets generated by a microgravity experiment on a 2D air-floating testbed and by a 3D kinematic simulation.
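The core machinery described in the abstract is standard incremental Structure from Motion: match features across frames of the tumbling target, recover the relative camera-target pose, and triangulate a sparse model of the surface. The snippet below is a minimal two-view sketch of that generic pipeline, assuming OpenCV and a known intrinsic matrix K; the function relative_pose_and_points is hypothetical and is not the authors' implementation.

```python
# Minimal two-view Structure-from-Motion step with OpenCV (assumed dependency).
# Illustrative sketch of the generic SfM machinery the paper builds on, not the
# authors' code: it recovers the relative pose between two frames of a tumbling
# target and triangulates a sparse point cloud of its surface.
import cv2
import numpy as np

def relative_pose_and_points(img1, img2, K):
    """Estimate the relative rotation/translation and a sparse 3D structure
    from two grayscale views of the same rigid target."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching for ORB binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then cheirality-checked pose recovery.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate inlier correspondences (translation is known only up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    points3d = (pts4d[:3] / pts4d[3]).T
    return R, t, points3d
```

A full pipeline would chain such pairwise estimates over the whole image sequence and refine them with bundle adjustment; because the camera is effectively static and the debris is the moving body, the recovered relative pose trajectory can then be used to estimate the target's spin, with translation known only up to scale from monocular images.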
Related papers
- Street Gaussians without 3D Object Tracker [86.62329193275916]
Existing methods rely on labor-intensive manual labeling of object poses to reconstruct dynamic objects in canonical space and move them based on these poses during rendering.
We propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy.
We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections.
arXiv Detail & Related papers (2024-12-07T05:49:42Z) - Event-based Structure-from-Orbit [23.97673114572094]
Certain applications in robotics and vision-based navigation require 3D perception of an object undergoing circular or spinning motion in front of a static camera.
We propose event-based structure-from-orbit (eSfO), where the aim is to reconstruct the 3D structure of a fast spinning object observed from a static event camera.
arXiv Detail & Related papers (2024-05-10T03:02:03Z) - Instantaneous Perception of Moving Objects in 3D [86.38144604783207]
The perception of 3D motion of surrounding traffic participants is crucial for driving safety.
We propose to leverage local occupancy completion of object point clouds to densify the shape cue, and mitigate the impact of swimming artifacts.
Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches.
arXiv Detail & Related papers (2024-05-05T01:07:24Z) - Reconstructing Satellites in 3D from Amateur Telescope Images [42.850623200702394]
This paper proposes a framework for the 3D reconstruction of satellites in low-Earth orbit, utilizing videos captured by small amateur telescopes.
The video data obtained from these telescopes differ significantly from data for standard 3D reconstruction tasks, characterized by intense motion blur, atmospheric turbulence, pervasive background light pollution, extended focal length and constrained observational perspectives.
We apply a customized Structure from Motion (SfM) approach, followed by an improved 3D Gaussian splatting algorithm, to achieve high-fidelity 3D model reconstruction.
arXiv Detail & Related papers (2024-04-29T03:13:09Z) - Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting [0.0]
We present an approach for mapping of satellites on orbit based on 3D Gaussian Splatting.
We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up.
Our model is shown to be capable of training on-board and rendering higher quality novel views of an unknown satellite nearly 2 orders of magnitude faster than previous NeRF-based algorithms.
arXiv Detail & Related papers (2024-01-05T00:49:56Z) - GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction [61.833152949826946]
We propose a novel Ground-aware Motion Model for 3D Human Motion Reconstruction, named GraMMaR.
GraMMaR learns the distribution of transitions in both pose and interaction between every joint and ground plane at each time step of a motion sequence.
It is trained to explicitly promote consistency between the motion and distance change towards the ground.
arXiv Detail & Related papers (2023-06-29T07:22:20Z) - Aerial Monocular 3D Object Detection [67.20369963664314]
DVDET is proposed to achieve aerial monocular 3D object detection in both the 2D image space and the 3D physical space.
To address the severe view deformation issue, we propose a novel trainable geo-deformable transformation module.
To encourage more researchers to investigate this area, we will release the dataset and related code.
arXiv Detail & Related papers (2022-08-08T08:32:56Z) - Machine Learning in Orbit Estimation: a Survey [1.9336815376402723]
It is estimated that around one million objects larger than one cm are currently orbiting the Earth.
Current approximate physics-based methods have errors on the order of kilometers for seven-day predictions.
We provide an overview of the work in applying Machine Learning for Orbit Determination, Orbit Prediction, and atmospheric density modeling.
arXiv Detail & Related papers (2022-07-19T00:17:27Z) - Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on the DQN algorithm, named DRLAVT.
It can guide a chaser spacecraft to approach an arbitrary non-cooperative space target relying only on color or RGB-D images.
It significantly outperforms a position-based visual servoing baseline that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z) - Attentive and Contrastive Learning for Joint Depth and Motion Field
Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z) - Gravity-Aware Monocular 3D Human-Object Reconstruction [73.25185274561139]
This paper proposes a new approach for joint markerless 3D human motion capture and object trajectory estimation from monocular RGB videos.
We focus on scenes with objects partially observed during a free flight.
In the experiments, our approach achieves state-of-the-art accuracy in 3D human motion capture on various metrics.
arXiv Detail & Related papers (2021-08-19T17:59:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.