Automatic Acquisition of a Repertoire of Diverse Grasping Trajectories
through Behavior Shaping and Novelty Search
- URL: http://arxiv.org/abs/2205.08189v1
- Date: Tue, 17 May 2022 09:17:31 GMT
- Title: Automatic Acquisition of a Repertoire of Diverse Grasping Trajectories
through Behavior Shaping and Novelty Search
- Authors: Aurélien Morel, Yakumo Kunimoto, Alex Coninx and Stéphane Doncieux
- Abstract summary: We introduce an approach to generate diverse grasping movements in order to solve this problem.
The movements are generated in simulation, for particular object positions.
Although we show that generated movements actually work on a real Baxter robot, the aim is to use this method to create a large dataset to bootstrap deep learning methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grasping a particular object may require a dedicated grasping movement that
may also be specific to the robot end-effector. No generic and autonomous
method exists to generate these movements without making hypotheses on the
robot or on the object. Learning methods could help to autonomously discover
relevant grasping movements, but they face an important issue: grasping
movements are so rare that a learning method based on exploration has little
chance to ever observe an interesting movement, thus creating a bootstrap
issue. We introduce an approach to generate diverse grasping movements in order
to solve this problem. The movements are generated in simulation, for
particular object positions. We test it on several simulated robots: Baxter,
Pepper and a Kuka Iiwa arm. Although we show that generated movements actually
work on a real Baxter robot, the aim is to use this method to create a large
dataset to bootstrap deep learning methods.
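The diversity-driven exploration described in the abstract rests on novelty search: instead of optimizing a fitness function, candidate movements are rewarded for producing behaviors unlike those already found. The sketch below is an illustrative toy implementation of that principle, not the authors' code; the behavior descriptor, function names, and all parameters are assumptions (in the paper, the descriptor would come from simulating a grasping trajectory, e.g. the end-effector pose at contact).

```python
import math
import random

def behavior(genome):
    # Hypothetical behavior descriptor: here a toy 2-D projection of the
    # genome. In the paper this would be computed by simulating the robot.
    return (genome[0], genome[1])

def novelty(desc, archive, k=3):
    # Novelty = mean distance to the k nearest descriptors in the archive.
    if not archive:
        return float("inf")
    dists = sorted(math.dist(desc, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(pop_size=20, generations=50, threshold=0.5, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(4)]
                  for _ in range(pop_size)]
    archive = []  # the growing repertoire of diverse behaviors
    for _ in range(generations):
        scored = [(novelty(behavior(g), archive), g) for g in population]
        scored.sort(key=lambda t: t[0], reverse=True)
        # Archive sufficiently novel behaviors.
        for score, g in scored:
            if score > threshold:
                archive.append(behavior(g))
        # Select the most novel half and mutate it to form the next population.
        parents = [g for _, g in scored[: pop_size // 2]]
        population = [[x + rng.gauss(0, 0.2) for x in rng.choice(parents)]
                      for _ in range(pop_size)]
    return archive

archive = novelty_search()
```

The archive, rather than any single best individual, is the output: a repertoire of mutually diverse behaviors, which is what makes the method suitable for building a grasping dataset rather than a single grasping policy.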
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z) - Naturalistic Robot Arm Trajectory Generation via Representation Learning [4.7682079066346565]
Integration of manipulator robots in household environments suggests a need for more predictable human-like robot motion.
One method of generating naturalistic motion trajectories is via imitation of human demonstrators.
This paper explores a self-supervised imitation learning method using an autoregressive neural network for an assistive drinking task.
arXiv Detail & Related papers (2023-09-14T09:26:03Z) - Dynamic Handover: Throw and Catch with Bimanual Hands [30.206469112964033]
We design a system with two multi-finger hands attached to robot arms to solve this problem.
We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy on the real robots.
To overcome the Sim2Real gap, we provide multiple novel algorithm designs including learning a trajectory prediction model for the object.
arXiv Detail & Related papers (2023-09-11T17:49:25Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal
Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z) - Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer on two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning [16.03016392075486]
We introduce an adversarial reinforcement learning framework for dynamic grasping, namely GraspARL.
We formulate the dynamic grasping problem as a 'move-and-grasp' game, where the robot is to pick up the object on the mover and the adversarial mover is to find a path to escape it.
In this way, the mover can auto-generate diverse moving trajectories while training, and the robot trained with the adversarial trajectories can generalize to various motion patterns.
arXiv Detail & Related papers (2022-03-04T03:25:09Z) - Learning Riemannian Manifolds for Geodesic Motion Skills [19.305285090233063]
We develop a learning framework that allows robots to learn new skills and adapt them to unseen situations.
We show how geodesic motion skills let a robot plan movements from and to arbitrary points on a data manifold.
We test our learning framework using a 7-DoF robotic manipulator, where the robot satisfactorily learns and reproduces realistic skills featuring elaborated motion patterns.
arXiv Detail & Related papers (2021-06-08T13:24:54Z) - Learning Bipedal Robot Locomotion from Human Movement [0.791553652441325]
We present a reinforcement learning based method for teaching a real world bipedal robot to perform movements directly from motion capture data.
Our method seamlessly transitions from training in a simulation environment to executing on a physical robot.
We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving.
arXiv Detail & Related papers (2021-05-26T00:49:37Z) - Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of varying difficulty.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.