A Learning System for Motion Planning of Free-Float Dual-Arm Space
Manipulator towards Non-Cooperative Object
- URL: http://arxiv.org/abs/2207.02464v1
- Date: Wed, 6 Jul 2022 06:22:34 GMT
- Title: A Learning System for Motion Planning of Free-Float Dual-Arm Space
Manipulator towards Non-Cooperative Object
- Authors: Shengjie Wang, Yuxue Cao, Xiang Zheng, Tao Zhang
- Abstract summary: We propose a learning system for motion planning of a free-float dual-arm space manipulator (FFDASM) towards non-cooperative objects.
Module I realizes multi-target trajectory planning for two end-effectors within a large target space.
Module II takes the point clouds of the non-cooperative object as input to estimate its motion properties, and then predicts the positions of target points on the object.
- Score: 13.289739243378245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen the emergence of non-cooperative objects in space,
such as failed satellites and space debris. These objects are usually operated on or
collected by free-float dual-arm space manipulators. By eliminating the
difficulties of modeling and manual parameter tuning, reinforcement learning
(RL) methods have shown promise in the trajectory planning of space
manipulators. Although previous studies demonstrate their effectiveness,
they cannot be applied to tracking dynamic targets with unknown rotation
(non-cooperative objects). In this paper, we propose a learning system for
motion planning of a free-float dual-arm space manipulator (FFDASM) towards
non-cooperative objects. Specifically, our method consists of two modules.
Module I realizes multi-target trajectory planning for two end-effectors
within a large target space. Next, Module II takes the point clouds of
the non-cooperative object as input to estimate its motion properties, and then
predicts the positions of target points on the object. Leveraging the
combination of Module I and Module II, we successfully track target points on a
spinning object with unknown rotational regularity. Furthermore, the
experiments also demonstrate the scalability and generalization of our learning
system.
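The abstract's two-module pipeline can be illustrated with a minimal sketch. This is not the authors' code: the constant-spin assumption, the single tracked point, and all function names are illustrative stand-ins for Module II's point-cloud-based motion estimation, whose predictions Module I's RL planner would then track.

```python
import numpy as np

def estimate_spin(p_prev, p_curr, dt):
    """Module II sketch: recover a planar angular rate (rad/s) from one
    tracked point observed at two instants, assuming rotation about z."""
    a0 = np.arctan2(p_prev[1], p_prev[0])
    a1 = np.arctan2(p_curr[1], p_curr[0])
    return (a1 - a0) / dt

def predict_target(p_curr, omega, horizon):
    """Rotate the current target point forward by omega * horizon about z."""
    theta = omega * horizon
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ p_curr

# Toy usage: a target point on an object spinning at 0.1 rad/s.
p0 = np.array([1.0, 0.0, 0.2])
p1 = predict_target(p0, 0.1, 1.0)          # observed motion over 1 s
omega_hat = estimate_spin(p0, p1, 1.0)     # estimated angular rate
p_future = predict_target(p1, omega_hat, 2.0)  # point Module I would track
```

In the paper the estimation is learned from raw point clouds rather than computed in closed form; the sketch only conveys the estimate-then-predict structure.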
Related papers
- FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning [19.491968038335944]
We introduce a self-supervised, structured representation and generation method that extracts spatial-temporal relationships in periodic or quasi-periodic motions.
Our work opens new possibilities for future advancements in general motion representation and learning algorithms.
arXiv Detail & Related papers (2024-02-21T13:59:21Z)
- Modular Neural Network Policies for Learning In-Flight Object Catching with a Robot Hand-Arm System [55.94648383147838]
We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of five core modules, including: (i) an object state estimator that learns object trajectory prediction, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, and (iv) a grasping control policy trained to perform soft catching motions.
We conduct extensive evaluations of our framework in simulation, for each module and for the integrated system, to demonstrate high success rates of in-flight catching.
arXiv Detail & Related papers (2023-12-21T16:20:12Z)
- MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking [56.92165669843006]
We propose MotionTrack, which learns robust short-term and long-term motions in a unified framework to associate trajectories from a short to long range.
For dense crowds, we design a novel Interaction Module to learn interaction-aware motions from short-term trajectories, which can estimate the complex movement of each target.
For extreme occlusions, we build a novel Refind Module to learn reliable long-term motions from the target's history trajectory, which can link the interrupted trajectory with its corresponding detection.
arXiv Detail & Related papers (2023-03-18T12:38:33Z)
- Reinforcement Learning with Prior Policy Guidance for Motion Planning of Dual-Arm Free-Floating Space Robot [11.272278713797537]
We propose a novel algorithm, Efficient, to facilitate RL-based methods in improving planning accuracy efficiently.
Our core contributions are constructing a mixed policy with prior-knowledge guidance and introducing the infinity norm to build a more reasonable reward function.
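The two ideas in this summary can be sketched briefly. This is an assumed formulation, not the paper's exact one: the blending weight `alpha` and both function names are hypothetical.

```python
import numpy as np

def reward_inf_norm(ee_pos, target_pos):
    """Reward as the negative infinity norm of the position error, so the
    worst single coordinate error dominates the penalty."""
    return -np.max(np.abs(ee_pos - target_pos))

def mixed_action(prior_action, rl_action, alpha=0.5):
    """Blend a prior-knowledge action (e.g. from a model-based planner)
    with the learned RL action; alpha is an assumed fixed weight."""
    return alpha * prior_action + (1.0 - alpha) * rl_action
```

Compared with an L2 penalty, the infinity norm keeps every coordinate of the end-effector error small rather than letting one axis lag behind.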
arXiv Detail & Related papers (2022-09-03T14:20:17Z)
- Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on the DQN algorithm, named DRLAVT.
It can guide the chaser spacecraft to approach an arbitrary non-cooperative space target relying only on color or RGB-D images.
It significantly outperforms a position-based visual servoing baseline that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Imitation Learning for Autonomous Trajectory Learning of Robot Arms in Space [13.64392246529041]
The concept of programming by demonstration, or imitation learning, is used for trajectory planning of manipulators mounted on small spacecraft.
For greater autonomy in future space missions and minimal human intervention through ground control, a robot arm with 7 degrees of freedom (DoF) is envisaged for carrying out multiple tasks like debris removal, on-orbit servicing, and assembly.
arXiv Detail & Related papers (2020-08-10T10:18:04Z)
- Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation [74.88956115580388]
Planning is performed in a low-dimensional latent state space that embeds images.
Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them.
arXiv Detail & Related papers (2020-03-19T18:43:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.