Modular Neural Network Policies for Learning In-Flight Object Catching
with a Robot Hand-Arm System
- URL: http://arxiv.org/abs/2312.13987v1
- Date: Thu, 21 Dec 2023 16:20:12 GMT
- Title: Modular Neural Network Policies for Learning In-Flight Object Catching
with a Robot Hand-Arm System
- Authors: Wenbin Hu, Fernando Acero, Eleftherios Triantafyllidis, Zhaocheng Liu,
Zhibin Li
- Abstract summary: We present a modular framework designed to enable a robot hand-arm system to learn how to catch flying objects.
Our framework consists of five core modules: (i) an object state estimator that learns object trajectory prediction, (ii) a catching pose quality network that learns to score and rank object poses for catching, (iii) a reaching control policy trained to move the robot hand to pre-catch poses, (iv) a grasping control policy trained to perform soft catching motions, and (v) a gating network trained to synthesize the actions of the reaching and grasping policies.
We conduct extensive evaluations of each module and the integrated system in simulation, demonstrating high success rates of in-flight catching and robustness to perturbations and sensory noise.
- Score: 55.94648383147838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a modular framework designed to enable a robot hand-arm system to
learn how to catch flying objects, a task that requires fast, reactive, and
accurately-timed robot motions. Our framework consists of five core modules:
(i) an object state estimator that learns object trajectory prediction, (ii) a
catching pose quality network that learns to score and rank object poses for
catching, (iii) a reaching control policy trained to move the robot hand to
pre-catch poses, (iv) a grasping control policy trained to perform soft
catching motions for safe and robust grasping, and (v) a gating network trained
to synthesize the actions given by the reaching and grasping policies. The former
two modules are trained via supervised learning and the latter three use deep
reinforcement learning in a simulated environment. We conduct extensive
evaluations of our framework in simulation for each module and the integrated
system, to demonstrate high success rates of in-flight catching and robustness
to perturbations and sensory noise. Whilst only simple cylindrical and
spherical objects are used for training, the integrated system shows successful
generalization to a variety of household objects that are not used in training.
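The gating step in module (v) can be illustrated with a minimal sketch: a gating network outputs a blending weight that mixes the actions proposed by the reaching policy and the grasping policy. All function names, observation fields, and shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gate_actions(obs, reach_action, grasp_action, gating_net):
    """Blend reaching and grasping actions with a learned gate w in [0, 1].

    w close to 1 favors the reaching policy; w close to 0 favors grasping.
    """
    w = gating_net(obs)
    return w * reach_action + (1.0 - w) * grasp_action

def toy_gating_net(obs):
    """Toy stand-in for a trained gating network (assumed behavior):
    as the predicted time-to-contact shrinks, shift weight toward the
    grasping policy."""
    return float(np.clip(obs["time_to_contact"] / 0.5, 0.0, 1.0))

obs = {"time_to_contact": 0.1}          # object almost within reach
reach = np.array([0.4, 0.0, 0.2])       # action from the reaching policy
grasp = np.array([0.0, 0.8, 0.0])       # action from the grasping policy
blended = gate_actions(obs, reach, grasp, toy_gating_net)
```

Here the gate evaluates to 0.2, so the blended command is dominated by the grasping action, matching the intuition that the hand should transition to a soft catching motion as contact approaches.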
Related papers
- Stimulating Imagination: Towards General-purpose Object Rearrangement [2.0885207827639785]
General-purpose object placement is a fundamental capability of intelligent robots.
We propose a framework named SPORT to accomplish this task.
SPORT learns a diffusion-based 3D pose estimator to ensure physically realistic results.
A set of simulation and real-world experiments demonstrate the potential of our approach to accomplish general-purpose object rearrangement.
arXiv Detail & Related papers (2024-08-03T03:53:05Z) - Towards Real-World Efficiency: Domain Randomization in Reinforcement Learning for Pre-Capture of Free-Floating Moving Targets by Autonomous Robots [0.0]
We introduce a deep reinforcement learning-based control approach to address the intricate challenge of the robotic pre-grasping phase under microgravity conditions.
Our methodology incorporates an off-policy reinforcement learning framework, employing the soft actor-critic technique to enable the gripper to proficiently approach a free-floating moving object.
For effective learning of the pre-grasping approach task, we developed a reward function that offers the agent clear and insightful feedback.
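A shaped reward of this kind typically combines a dense distance term with an alignment term so the agent receives informative feedback at every step. The sketch below is an assumed illustration of that idea; the specific terms and coefficients are not the paper's reward function.

```python
import numpy as np

def pregrasp_reward(gripper_pos, target_pos, gripper_axis, approach_axis):
    """Illustrative dense reward for approaching a free-floating target.

    Combines (assumed terms):
      - a penalty proportional to gripper-target distance, and
      - a bonus for aligning the gripper axis with a desired approach axis.
    """
    dist = np.linalg.norm(target_pos - gripper_pos)
    # Cosine similarity between the gripper axis and the approach axis.
    align = np.dot(gripper_axis, approach_axis) / (
        np.linalg.norm(gripper_axis) * np.linalg.norm(approach_axis)
    )
    return -1.0 * dist + 0.5 * align

r = pregrasp_reward(
    gripper_pos=np.array([0.0, 0.0, 0.0]),
    target_pos=np.array([0.3, 0.0, 0.4]),
    gripper_axis=np.array([0.0, 0.0, 1.0]),
    approach_axis=np.array([0.0, 0.0, 1.0]),
)
```

With the gripper 0.5 m away but perfectly aligned, the two terms cancel; moving closer while staying aligned strictly increases the reward, which is the "clear and insightful feedback" property the abstract describes.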
arXiv Detail & Related papers (2024-06-10T16:54:51Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our key insight is to utilize offline reinforcement learning techniques to enable efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Silver-Bullet-3D at ManiSkill 2021: Learning-from-Demonstrations and
Heuristic Rule-based Methods for Object Manipulation [118.27432851053335]
This paper presents an overview and comparative analysis of our systems designed for the following two tracks in SAPIEN ManiSkill Challenge 2021: No Interaction Track.
The No Interaction track targets learning policies from pre-collected demonstration trajectories.
In this track, we design a Heuristic Rule-based Method (HRM) to trigger high-quality object manipulation by decomposing the task into a series of sub-tasks.
For each sub-task, simple rule-based control strategies are adopted to predict actions that can be applied to the robotic arms.
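The decomposition described above can be sketched as a list of (sub-task, controller, completion-predicate) triples, where the first unfinished sub-task's rule-based controller produces the action. The sub-task names, state fields, and rules here are illustrative assumptions, not the HRM system's actual design.

```python
def move_to_object(state):
    return {"cmd": "move", "target": state["object_pos"]}

def close_gripper(state):
    return {"cmd": "grip", "force": 1.0}

def move_to_goal(state):
    return {"cmd": "move", "target": state["goal_pos"]}

# Each sub-task: (name, rule-based controller, completion predicate).
SUB_TASKS = [
    ("approach",  move_to_object, lambda s: s["dist_to_object"] < 0.01),
    ("grasp",     close_gripper,  lambda s: s["grasped"]),
    ("transport", move_to_goal,   lambda s: s["dist_to_goal"] < 0.01),
]

def select_action(state):
    """Run the controller of the first sub-task whose predicate is unmet."""
    for name, controller, done in SUB_TASKS:
        if not done(state):
            return name, controller(state)
    return "done", None

state = {
    "object_pos": (0.3, 0.1, 0.0),
    "goal_pos": (0.0, 0.5, 0.0),
    "dist_to_object": 0.2,
    "grasped": False,
    "dist_to_goal": 0.6,
}
name, action = select_action(state)
```

Because each predicate gates entry into the next sub-task, the same fixed rule set adapts to wherever the episode currently stands, which is what makes simple per-sub-task controllers viable for the full manipulation task.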
arXiv Detail & Related papers (2022-06-13T16:20:42Z) - FlowBot3D: Learning 3D Articulation Flow to Manipulate Articulated Objects [14.034256001448574]
We propose a vision-based system that learns to predict the potential motions of the parts of a variety of articulated objects.
We deploy an analytical motion planner based on this vector field to achieve a policy that yields maximum articulation.
Results show that our system achieves state-of-the-art performance in both simulated and real-world experiments.
arXiv Detail & Related papers (2022-05-09T15:35:33Z) - V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated
Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z) - Distributed Reinforcement Learning of Targeted Grasping with Active
Vision for Mobile Manipulators [4.317864702902075]
We present the first RL-based system for a mobile manipulator that can (a) achieve targeted grasping generalizing to unseen target objects, (b) learn complex grasping strategies for cluttered scenes with occluded objects, and (c) perform active vision through its movable wrist camera to better locate objects.
We train and evaluate our system in a simulated environment, identify key components for improving performance, analyze its behaviors, and transfer to a real-world setup.
arXiv Detail & Related papers (2020-07-16T02:47:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.