GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning
- URL: http://arxiv.org/abs/2203.02119v1
- Date: Fri, 4 Mar 2022 03:25:09 GMT
- Title: GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning
- Authors: Tianhao Wu, Fangwei Zhong, Yiran Geng, Hongchen Wang, Yongjian Zhu,
Yizhou Wang, Hao Dong
- Abstract summary: We introduce an adversarial reinforcement learning framework for dynamic grasping, namely GraspARL.
We formulate the dynamic grasping problem as a 'move-and-grasp' game, where the robot is to pick up the object on the mover and the adversarial mover is to find a path to escape it.
In this way, the mover can automatically generate diverse moving trajectories during training, and the robot trained with these adversarial trajectories can generalize to various motion patterns.
- Score: 16.03016392075486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grasping moving objects, such as goods on a belt or living animals, is an
important but challenging task in robotics. Conventional approaches rely on a
set of manually defined object motion patterns for training, resulting in poor
generalization to unseen object trajectories. In this work, we introduce an
adversarial reinforcement learning framework for dynamic grasping, namely
GraspARL. Specifically, we formulate the dynamic grasping problem as a
'move-and-grasp' game, where the robot is to pick up the object on the mover
and the adversarial mover is to find a path to escape it. Hence, the two agents
play a min-max game and are trained by reinforcement learning. In this way, the
mover can automatically generate diverse moving trajectories during training, and the
robot trained with these adversarial trajectories can generalize to various
motion patterns. Empirical results in simulation and in a real-world scenario
demonstrate the effectiveness of each component and the good generalization of our method.
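To make the 'move-and-grasp' formulation concrete, below is a minimal, self-contained sketch of an adversarial training loop in NumPy. The toy MoveAndGraspEnv, the linear Gaussian REINFORCE policies, the distance-based reward shaping, and the alternating update schedule are all illustrative assumptions, not the authors' implementation; the key point carried over from the abstract is that the mover receives the negation of the robot's reward, so the two agents play a zero-sum (min-max) game.

```python
# Minimal sketch of an adversarial "move-and-grasp" training loop (illustrative only).
# The environment, policies, and reward shaping are assumptions for exposition.
import numpy as np

class MoveAndGraspEnv:
    """Toy 2D stand-in for the simulator: a gripper chases an object on a mover."""
    def __init__(self, horizon=50, grasp_radius=0.05):
        self.horizon, self.grasp_radius = horizon, grasp_radius

    def reset(self):
        self.t = 0
        self.gripper = np.zeros(2)
        self.mover = np.random.uniform(-1.0, 1.0, size=2)
        return self._obs()

    def _obs(self):
        return np.concatenate([self.gripper, self.mover])

    def step(self, robot_action, mover_action):
        # Both agents apply small, clipped planar displacements.
        self.gripper += 0.05 * np.clip(robot_action, -1, 1)
        self.mover += 0.03 * np.clip(mover_action, -1, 1)
        self.t += 1
        dist = np.linalg.norm(self.gripper - self.mover)
        grasped = dist < self.grasp_radius
        robot_reward = (1.0 if grasped else 0.0) - dist  # robot: close the distance and grasp
        done = grasped or self.t >= self.horizon
        # Zero-sum: the adversarial mover is rewarded for escaping the robot.
        return self._obs(), robot_reward, -robot_reward, done

class LinearGaussianPolicy:
    """Tiny REINFORCE policy: Gaussian actions from a linear map of the observation."""
    def __init__(self, obs_dim=4, act_dim=2, lr=1e-2, sigma=0.3):
        self.W = np.zeros((act_dim, obs_dim))
        self.lr, self.sigma = lr, sigma

    def act(self, obs):
        mean = self.W @ obs
        action = mean + self.sigma * np.random.randn(*mean.shape)
        return action, (obs, action, mean)

    def update(self, trajectory, returns):
        # Policy-gradient step: grad log N(a | mean, sigma) scaled by the return.
        grad = np.zeros_like(self.W)
        for (obs, action, mean), ret in zip(trajectory, returns):
            grad += np.outer((action - mean) / self.sigma**2, obs) * ret
        self.W += self.lr * grad / max(len(trajectory), 1)

def discounted_returns(rewards, gamma=0.99):
    out, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))

env = MoveAndGraspEnv()
robot, mover = LinearGaussianPolicy(), LinearGaussianPolicy()

for episode in range(500):
    obs, done = env.reset(), False
    robot_traj, mover_traj, robot_rews, mover_rews = [], [], [], []
    while not done:
        robot_action, r_info = robot.act(obs)
        mover_action, m_info = mover.act(obs)
        obs, r_rew, m_rew, done = env.step(robot_action, mover_action)
        robot_traj.append(r_info)
        mover_traj.append(m_info)
        robot_rews.append(r_rew)
        mover_rews.append(m_rew)
    # Alternate updates so each agent adapts to the other's latest behavior.
    if episode % 2 == 0:
        robot.update(robot_traj, discounted_returns(robot_rews))
    else:
        mover.update(mover_traj, discounted_returns(mover_rews))
```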
Related papers
- DexDribbler: Learning Dexterous Soccer Manipulation via Dynamic Supervision [26.9579556496875]
Joint manipulation of moving objects and locomotion with legs, such as playing soccer, receive scant attention in the learning community.
We propose a feedback control block to compute the necessary body-level movement accurately and use its outputs as dynamic joint-level locomotion supervision.
We observe that our learning scheme can not only make the policy network converge faster but also enable soccer robots to perform sophisticated maneuvers.
arXiv Detail & Related papers (2024-03-21T11:16:28Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Causal Policy Gradient for Whole-Body Mobile Manipulation [39.3461626518495]
We introduce Causal MoMa, a new reinforcement learning framework to train policies for typical MoMa tasks.
We evaluate the performance of Causal MoMa on three types of simulated robots across different MoMa tasks.
arXiv Detail & Related papers (2023-05-04T23:23:47Z) - Synthesizing Physical Character-Scene Interactions [64.26035523518846]
For realistic character animation, it is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z) - Automatic Acquisition of a Repertoire of Diverse Grasping Trajectories
through Behavior Shaping and Novelty Search [0.0]
We introduce an approach to generate diverse grasping movements in order to solve this problem.
The movements are generated in simulation, for particular object positions.
Although we show that the generated movements actually work on a real Baxter robot, the aim is to use this method to create a large dataset to bootstrap deep learning methods.
arXiv Detail & Related papers (2022-05-17T09:17:31Z) - Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Hierarchical Reinforcement Learning of Locomotion Policies in Response
to Approaching Objects: A Preliminary Study [11.919315372249802]
Deep reinforcement learning has enabled complex kinematic systems such as humanoid robots to move from point A to point B.
Inspired by the observation of the innate reactive behavior of animals in nature, we hope to extend this progress in robot locomotion.
We build a simulation environment in MuJoCo where a legged robot must avoid getting hit by a ball moving toward it.
arXiv Detail & Related papers (2022-03-20T18:24:18Z) - A Differentiable Recipe for Learning Visual Non-Prehensile Planar
Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z) - Learning Bipedal Robot Locomotion from Human Movement [0.791553652441325]
We present a reinforcement learning based method for teaching a real world bipedal robot to perform movements directly from motion capture data.
Our method seamlessly transitions from training in a simulation environment to executing on a physical robot.
We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving.
arXiv Detail & Related papers (2021-05-26T00:49:37Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)