Space Non-cooperative Object Active Tracking with Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2112.09854v1
- Date: Sat, 18 Dec 2021 06:12:24 GMT
- Title: Space Non-cooperative Object Active Tracking with Deep Reinforcement
Learning
- Authors: Dong Zhou, Guanghui Sun, Wenxiao Lei
- Abstract summary: We propose an end-to-end active visual tracking method based on the DQN algorithm, named DRLAVT.
It can guide the chasing spacecraft to approach an arbitrary space non-cooperative target relying merely on color or RGBD images.
It significantly outperforms a position-based visual servoing baseline algorithm that adopts the state-of-the-art 2D monocular tracker SiamRPN.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Active visual tracking of a space non-cooperative object is significant for
future intelligent spacecraft to realise space debris removal, asteroid
exploration, and autonomous rendezvous and docking. However, existing works often
decompose this task into separate subproblems (e.g. image preprocessing,
feature extraction and matching, position and pose estimation, and control law
design) and optimize each module alone, which is cumbersome and sub-optimal. To
this end, we propose an end-to-end active visual tracking method based on the DQN
algorithm, named DRLAVT. It can guide the chasing spacecraft to approach an
arbitrary space non-cooperative target relying merely on color or RGBD images,
and it significantly outperforms a position-based visual servoing baseline
algorithm that adopts the state-of-the-art 2D monocular tracker SiamRPN. Extensive
experiments with diverse network architectures, different perturbations, and
multiple targets demonstrate the effectiveness and robustness of DRLAVT. In
addition, we further show that our method indeed learnt the motion patterns of
the target with deep reinforcement learning through hundreds of trial-and-error
episodes.
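The abstract describes an end-to-end approach: the agent maps camera images directly to chaser actions and is trained with the DQN algorithm. A minimal sketch of the DQN-style temporal-difference update such a method builds on is below. This is illustrative only, not the paper's implementation: a toy linear Q-function stands in for DRLAVT's convolutional network, the feature vector stands in for an image observation, and all names, dimensions, and the reward are hypothetical.

```python
import numpy as np

# Sketch of a one-step DQN update: the online Q-network is trained toward the
# target  y = r + gamma * max_a' Q_target(s', a'),  with a periodically synced
# target-network copy (here, just a copy of the weight matrix).

rng = np.random.default_rng(0)
n_features, n_actions = 8, 6   # hypothetical: e.g. 6 discrete thruster commands
gamma, lr = 0.99, 0.05

W = rng.normal(scale=0.1, size=(n_actions, n_features))  # online "network"
W_target = W.copy()                                      # target-network copy

def q_values(weights, obs):
    """Q-value per action for a feature-vector observation (linear model)."""
    return weights @ obs

def dqn_update(W, W_target, obs, action, reward, next_obs, done):
    """One gradient step on the squared TD error for a single transition."""
    bootstrap = 0.0 if done else gamma * q_values(W_target, next_obs).max()
    td_error = (reward + bootstrap) - q_values(W, obs)[action]
    W[action] += lr * td_error * obs  # gradient of 0.5 * td_error**2 w.r.t. W[action]
    return td_error

# Toy transition: in the paper's setting obs would be features of an RGB/RGBD
# frame and reward would encode keeping the target centered at a safe range.
obs = rng.normal(size=n_features)
next_obs = rng.normal(size=n_features)
err = dqn_update(W, W_target, obs, action=2, reward=1.0, next_obs=next_obs, done=False)
```

In a full agent this update runs over minibatches sampled from a replay buffer, with `W_target` refreshed from `W` every few thousand steps; the end-to-end aspect is that `obs` comes straight from pixels rather than from a hand-built pose-estimation pipeline.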
Related papers
- Leveraging Neural Radiance Fields for Pose Estimation of an Unknown Space Object during Proximity Operations [14.624172952608653]
We present a novel method that enables an "off-the-shelf" spacecraft pose estimator to be applied on an unknown target.
We train the NeRF model using a sparse collection of images that depict the target, and in turn generate a large dataset that is diverse both in terms of viewpoint and illumination.
We demonstrate that our method successfully enables the training of an off-the-shelf spacecraft pose estimation network from a sparse set of images.
arXiv Detail & Related papers (2024-05-21T12:34:03Z) - TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes [58.180556221044235]
We present a new approach to bridge the domain gap between synthetic and real-world data for unmanned aerial vehicle (UAV)-based perception.
Our formulation is designed for dynamic scenes consisting of moving objects or human actions.
We evaluate its performance on challenging datasets, including Okutama Action and UG2.
arXiv Detail & Related papers (2024-05-04T21:55:33Z) - OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z) - Space Debris: Are Deep Learning-based Image Enhancements part of the
Solution? [9.117415383776695]
The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace.
The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft and rogue/inactive space objects is critical to asset protection.
The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions to overcome the limitations and image artefacts most prevalent when captured with monocular cameras in the visible light spectrum.
arXiv Detail & Related papers (2023-08-01T09:38:41Z) - Geometric-aware Pretraining for Vision-centric 3D Object Detection [77.7979088689944]
We propose a novel geometric-aware pretraining framework called GAPretrain.
GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors.
We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively.
arXiv Detail & Related papers (2023-04-06T14:33:05Z) - Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation
around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5) is tested.
The paper discusses the path to implementing the feature recognition algorithms and towards integrating them into the spacecraft Guidance Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z) - SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for
Spatial-Aware Visual Representations [85.38562724999898]
We propose a 2D Image and 3D Point cloud Unsupervised pre-training strategy, called SimIPU.
Specifically, we develop a multi-modal contrastive learning framework that consists of an intra-modal spatial perception module and an inter-modal feature interaction module.
To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets.
arXiv Detail & Related papers (2021-12-09T03:27:00Z) - Multi-Object Tracking with Deep Learning Ensemble for Unmanned Aerial
System Applications [0.0]
Multi-object tracking (MOT) is a crucial component of situational awareness in military defense applications.
We present a robust object tracking architecture aimed to accommodate for the noise in real-time situations.
We propose a kinematic prediction model, called Deep Extended Kalman Filter (DeepEKF), in which a sequence-to-sequence architecture is used to predict entity trajectories in latent space.
arXiv Detail & Related papers (2021-10-05T13:50:38Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural
Network [10.6872574091924]
This work explores a novel methodology, using Convolutional Neural Networks (CNNs), for estimating the pose of uncooperative spacecraft.
Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information.
The performed experiments show how this work competes with the state-of-the-art in uncooperative spacecraft pose estimation.
arXiv Detail & Related papers (2021-04-19T12:46:05Z) - Latent Space Roadmap for Visual Action Planning of Deformable and Rigid
Object Manipulation [74.88956115580388]
Planning is performed in a low-dimensional latent state space that embeds images.
Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them.
arXiv Detail & Related papers (2020-03-19T18:43:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.