E2R: a Hierarchical-Learning inspired Novelty-Search method to generate diverse repertoires of grasping trajectories
- URL: http://arxiv.org/abs/2210.07887v2
- Date: Wed, 17 Apr 2024 14:20:02 GMT
- Title: E2R: a Hierarchical-Learning inspired Novelty-Search method to generate diverse repertoires of grasping trajectories
- Authors: Johann Huber, Oumar Sane, Alex Coninx, Faiz Ben Amar, Stephane Doncieux
- Abstract summary: We introduce a new NS-based method that can generate large datasets of grasping trajectories in a platform-agnostic manner.
Inspired by the hierarchical learning paradigm, our method decouples approach and prehension to make the behavioral space smoother.
Experiments conducted on 3 different robot-gripper setups and on several standard objects show that our method outperforms the state of the art.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic grasping refers to the task of making a robotic system pick up an object by applying forces and torques on its surface. Despite recent advances in data-driven approaches, grasping remains an unsolved problem. Most works on this task rely on priors and heavy constraints to avoid the exploration problem. Novelty Search (NS) refers to evolutionary algorithms that replace the selection of the best-performing individuals with the selection of the most novel ones. Such methods have already shown promising results on hard exploration problems. In this work, we introduce a new NS-based method that can generate large datasets of grasping trajectories in a platform-agnostic manner. Inspired by the hierarchical learning paradigm, our method decouples approach and prehension to make the behavioral space smoother. Experiments conducted on 3 different robot-gripper setups and on several standard objects show that our method outperforms the state of the art for generating diverse repertoires of grasping trajectories, achieving a higher successful-run ratio as well as better diversity for both approach and prehension. Some of the generated solutions have been successfully deployed on a real robot, demonstrating the exploitability of the obtained repertoires.
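To make the decoupled behavioral space concrete, here is a minimal Novelty Search sketch in which each individual is evaluated once, its approach and prehension descriptors are computed separately and concatenated for the novelty score, and every successful grasp is stored in a repertoire. The function `simulate_grasp`, the descriptor sizes, and the mutation scheme are illustrative placeholders, not the paper's implementation.

```python
"""Minimal Novelty Search sketch with a decoupled approach/prehension behavioral space."""
import numpy as np

def simulate_grasp(genome):
    """Toy stand-in for the robot/gripper simulation: maps a trajectory genome
    to a 3-D approach descriptor, a 2-D prehension descriptor and a success flag."""
    approach = np.tanh(genome[:3])
    prehension = np.tanh(genome[3:5])
    success = bool(np.linalg.norm(genome[:5]) < 1.0)
    return approach, prehension, success

def novelty(descriptor, archive, k=15):
    """Average distance to the k nearest descriptors already archived."""
    if not archive:
        return float("inf")
    dists = np.sort(np.linalg.norm(np.asarray(archive) - descriptor, axis=1))
    return float(dists[:k].mean())

def novelty_search(pop_size=100, n_gen=50, genome_dim=30, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop_size, genome_dim))
    archive, repertoire = [], []          # behavioral archive / successful grasps
    for _ in range(n_gen):
        scored = []
        for genome in population:
            approach, prehension, success = simulate_grasp(genome)
            # Decoupled behavior: approach and prehension descriptors are
            # computed separately, then concatenated for the novelty score.
            descriptor = np.concatenate([approach, prehension])
            scored.append((novelty(descriptor, archive), genome))
            archive.append(descriptor)    # real NS variants archive only a subset
            if success:
                repertoire.append(genome)
        # Selection by novelty only: keep the most novel half, mutate it.
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = np.stack([g for _, g in scored[:pop_size // 2]])
        offspring = parents + sigma * rng.normal(size=parents.shape)
        population = np.vstack([parents, offspring])
    return repertoire

repertoire = novelty_search()
print(f"{len(repertoire)} successful grasp trajectories collected")
```

In this sketch novelty alone drives selection, while the repertoire of successful grasps grows as a side effect of exploration, which mirrors, at a very high level, the mechanism the abstract describes.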
Related papers
- Contribution à l'Optimisation d'un Comportement Collectif pour un Groupe de Robots Autonomes (Contribution to the Optimization of a Collective Behavior for a Group of Autonomous Robots) [0.0]
This thesis studies collective robotics, and more specifically the optimization problems of multi-robot systems.
The first contribution is the use of the Butterfly Optimization Algorithm (BOA) to solve the Unknown Area Exploration problem.
The second contribution is the development of a new simulation framework for benchmarking dynamic incremental problems in robotics.
arXiv Detail & Related papers (2023-06-10T21:49:08Z)
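The entry above names the Butterfly Optimization Algorithm without detailing it, so below is a generic sketch of BOA as it is commonly described (fragrance-weighted global and local search phases), applied to a toy objective. It is not the thesis's formulation for the Unknown Area Exploration problem; the constants, the intensity mapping, and the greedy acceptance step are common choices rather than anything taken from that work.

```python
"""Generic Butterfly Optimization Algorithm (BOA) sketch on a toy objective."""
import numpy as np

def boa(objective, dim=5, n_butterflies=30, n_iter=200,
        c=0.01, a=0.1, p_switch=0.8, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_butterflies, dim))
    fitness = np.array([objective(x) for x in pop])
    best = pop[fitness.argmin()].copy()
    for _ in range(n_iter):
        # Fragrance grows with stimulus intensity (here: inverse of the cost).
        intensity = 1.0 / (1.0 + fitness)
        fragrance = c * intensity ** a
        for i in range(n_butterflies):
            r = rng.random()
            if rng.random() < p_switch:
                # Global search phase: move toward the best butterfly.
                step = (r ** 2) * best - pop[i]
            else:
                # Local search phase: random walk between two peers.
                j, k = rng.integers(n_butterflies, size=2)
                step = (r ** 2) * pop[j] - pop[k]
            candidate = np.clip(pop[i] + fragrance[i] * step, lo, hi)
            cand_fit = objective(candidate)
            if cand_fit < fitness[i]:            # greedy acceptance (one common variant)
                pop[i], fitness[i] = candidate, cand_fit
        best = pop[fitness.argmin()].copy()
    return best, float(fitness.min())

# Toy usage: minimize the sphere function.
best_x, best_f = boa(lambda x: float(np.sum(x ** 2)))
print(best_f)
```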
- Multi-Robot Path Planning Combining Heuristics and Multi-Agent Reinforcement Learning [0.0]
In the movement process, robots need to avoid collisions with other moving robots while minimizing their travel distance.
Previous methods for this problem either continuously replan paths using search methods to avoid conflicts or choose appropriate collision avoidance strategies based on learning approaches.
We propose a path planning method, MAPPOHR, which combines heuristic search, empirical rules, and multi-agent reinforcement learning.
arXiv Detail & Related papers (2023-06-02T05:07:37Z)
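For the MAPPOHR entry above, the sketch below covers only the search-plus-rules side of such a hybrid planner: cooperative (prioritized) A* on a grid, where each robot reserves the cells it will occupy over time and later robots treat those reservations as obstacles. The learned multi-agent policy is omitted, as are edge conflicts and robots parked at their goals; function names like `plan_all` are illustrative, not the paper's code.

```python
"""Cooperative (prioritized) A* sketch: search plus reservation rules only."""
import heapq

MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]    # wait + 4-connected moves

def space_time_astar(grid, start, goal, reserved, max_t=200):
    """grid[r][c] == 1 is a static obstacle; `reserved` holds (row, col, t) triples."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                                          # Manhattan distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        _, t, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if (pos, t) in visited or t >= max_t:
            continue
        visited.add((pos, t))
        for dr, dc in MOVES:
            nr, nc = pos[0] + dr, pos[1] + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                continue
            if (nr, nc, t + 1) in reserved:            # rule: respect reservations
                continue
            heapq.heappush(open_set, (t + 1 + h((nr, nc)), t + 1,
                                      (nr, nc), path + [(nr, nc)]))
    return None                                        # no conflict-free path found

def plan_all(grid, starts, goals):
    """Plan robots one by one; earlier robots' paths become reservations."""
    reserved, paths = set(), []
    for start, goal in zip(starts, goals):
        path = space_time_astar(grid, start, goal, reserved)
        paths.append(path)
        if path:
            for t, (r, c) in enumerate(path):
                reserved.add((r, c, t))
    return paths

# Toy usage: two robots swap corners of an empty 4x4 grid without colliding.
grid = [[0] * 4 for _ in range(4)]
print(plan_all(grid, starts=[(0, 0), (3, 3)], goals=[(3, 3), (0, 0)]))
```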
- Continuous Trajectory Generation Based on Two-Stage GAN [50.55181727145379]
We propose a novel two-stage generative adversarial framework to generate the continuous trajectory on the road network.
Specifically, we build the generator under the human mobility hypothesis of the A* algorithm to learn the human mobility behavior.
For the discriminator, we combine the sequential reward with the mobility yaw reward to enhance the effectiveness of the generator.
arXiv Detail & Related papers (2023-01-16T09:54:02Z)
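The Two-Stage GAN entry above builds on a standard adversarial setup underneath its A*-guided generator and reward-shaped discriminator. The sketch below shows only that underlying setup for fixed-length 2-D trajectories, assuming PyTorch; the two-stage architecture, the road network, and the mobility-yaw reward are not reproduced.

```python
"""Bare-bones GAN for fixed-length 2-D trajectories (illustration only)."""
import torch
import torch.nn as nn

TRAJ_LEN, NOISE_DIM = 32, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, TRAJ_LEN * 2))

    def forward(self, z):
        return self.net(z).view(-1, TRAJ_LEN, 2)       # (batch, T, xy)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TRAJ_LEN * 2, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, traj):
        return self.net(traj.flatten(1))                # raw logits

def train(real_trajs, epochs=100, batch=64, lr=1e-3):
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=lr)
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    for _ in range(epochs):
        real = real_trajs[torch.randint(len(real_trajs), (batch,))]
        fake = gen(torch.randn(batch, NOISE_DIM))
        # Discriminator step: push real logits up, fake logits down.
        d_loss = bce(disc(real), ones) + bce(disc(fake.detach()), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        # Generator step: make fake trajectories look real.
        g_loss = bce(disc(fake), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    return gen

# Toy "real" data: noisy straight-line trajectories from (0, 0) to (1, 1).
t = torch.linspace(0, 1, TRAJ_LEN).unsqueeze(-1).repeat(1, 2)
real = t.unsqueeze(0) + 0.05 * torch.randn(512, TRAJ_LEN, 2)
generator = train(real)
```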
- Meta Navigator: Search for a Good Adaptation Policy for Few-shot Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, while different algorithms often excel at different few-shot learning scenarios.
We present Meta Navigator, a framework that attempts to solve the limitation in few-shot learning by seeking a higher-level strategy.
arXiv Detail & Related papers (2021-09-13T07:20:01Z)
- Diversity-based Trajectory and Goal Selection with Hindsight Experience Replay [8.259694128526112]
We propose diversity-based trajectory and goal selection with HER (DTGSH).
We show that our method can learn more quickly and reach higher performance than other state-of-the-art approaches on all tasks.
arXiv Detail & Related papers (2021-08-17T21:34:24Z)
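As a concrete stand-in for the diversity-based selection mentioned in the DTGSH entry above, the sketch below greedily picks the k items whose achieved-goal features are farthest from anything already selected (a farthest-point heuristic). The paper's own selection mechanism is more elaborate; this is only meant to make the idea of preferring diverse replay samples tangible.

```python
"""Greedy farthest-point selection as a simplified diversity-based sampler."""
import numpy as np

def select_diverse(goal_features, k, seed=0):
    """goal_features: (N, D) array of achieved-goal descriptors; returns k indices."""
    rng = np.random.default_rng(seed)
    feats = np.asarray(goal_features, dtype=float)
    selected = [int(rng.integers(len(feats)))]          # random seed point
    # Distance from every item to its nearest already-selected item.
    min_dist = np.linalg.norm(feats - feats[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(min_dist.argmax())                     # farthest-point rule
        selected.append(nxt)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(feats - feats[nxt], axis=1))
    return selected

# Toy usage: pick 5 diverse goals out of 200 random 3-D goal positions.
goals = np.random.default_rng(1).uniform(-1, 1, size=(200, 3))
print(select_diverse(goals, k=5))
```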
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
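The MURAL entry above centers on a normalized maximum likelihood (NML) distribution for an outcome classifier. The sketch below computes the conditional-NML probability in its most literal form, refitting a plain logistic-regression success classifier once per candidate label and normalizing the resulting likelihoods; the paper meta-learns an approximation rather than refitting per query, so treat this only as an illustration of the quantity being computed.

```python
"""Literal conditional-NML success probability with a logistic-regression stand-in."""
import numpy as np
from sklearn.linear_model import LogisticRegression

def cnml_success_probability(X, y, x_query):
    """X: (N, D) states, y: (N,) 0/1 outcome labels, x_query: (D,) new state."""
    likelihoods = []
    for candidate_label in (0, 1):
        X_aug = np.vstack([X, x_query])
        y_aug = np.append(y, candidate_label)
        model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
        proba = model.predict_proba(x_query.reshape(1, -1))[0]
        # Probability the refit model assigns to the label we just appended.
        likelihoods.append(proba[list(model.classes_).index(candidate_label)])
    likelihoods = np.asarray(likelihoods)
    nml = likelihoods / likelihoods.sum()     # normalize over candidate labels
    return nml[1]                             # p(success | x_query), usable as a reward

# Toy usage: 1-D states, "success" when the state is near +1.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 1))
y = (np.abs(X[:, 0] - 1.0) < 0.3).astype(int)
print(cnml_success_probability(X, y, np.array([0.9])),
      cnml_success_probability(X, y, np.array([-1.5])))
```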
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning-from-demonstration techniques is that human demonstrations for a single task query follow a distribution with multiple modes.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
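The mode-averaging failure described in the entry above is easy to reproduce: a plain regressor trained on bimodal demonstrations predicts the invalid mean between the modes. One standard remedy, sketched below with a small mixture density network and assuming PyTorch, is to predict a distribution over outputs instead. This illustrates the problem the entry refers to, not the paper's attention-shifting model.

```python
"""Mixture density network (MDN) on bimodal toy demonstrations."""
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Predicts a K-component Gaussian mixture over a 1-D target."""
    def __init__(self, in_dim=1, n_comp=3, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_comp)        # mixture logits
        self.mu = nn.Linear(hidden, n_comp)        # component means
        self.log_sigma = nn.Linear(hidden, n_comp)

    def forward(self, x):
        h = self.backbone(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of y under the predicted mixture."""
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y) + log_pi           # (batch, K)
    return -torch.logsumexp(log_prob, dim=-1).mean()

# Toy bimodal demonstrations: for the same query x, y is either near +1 or near -1.
x = torch.rand(1024, 1)
y = (torch.rand(1024, 1) < 0.5).float() * 2 - 1 + 0.05 * torch.randn(1024, 1)

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    pi_logits, mu, log_sigma = model(x)
    loss = mdn_nll(pi_logits, mu, log_sigma, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned component means land near +1 and -1 instead of averaging to 0.
print(model(torch.tensor([[0.5]]))[1].detach())
```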
- Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization [4.559353193715442]
This paper studies the reward-based policy exploration problem with a supervised learning approach.
We show that our method provides stable learning progress and significant sample efficiency compared to a number of state-of-the-art robotic reinforcement learning methods.
arXiv Detail & Related papers (2020-11-09T09:53:37Z)
- Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z)
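The last entry concerns model-based control of an unknown linear system. A minimal certainty-equivalence sketch is given below, assuming NumPy and SciPy: excite the system with random inputs, estimate (A, B) by least squares, and play the LQR controller computed from the estimates. The paper's contribution is how to explore and certify stabilization with the stated $\tilde{\mathcal{O}}(\sqrt{T})$ regret, which this sketch does not address.

```python
"""Certainty-equivalence sketch for an unknown linear system x' = A x + B u + w."""
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])       # unknown to the learner
B_true = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

# 1) Exploration phase: random inputs to collect informative transitions.
x = np.zeros(2)
states, inputs, next_states = [], [], []
for _ in range(200):
    u = rng.normal(size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
    states.append(x)
    inputs.append(u)
    next_states.append(x_next)
    x = x_next

# 2) Least-squares estimate of [A B] from the collected transitions.
Z = np.hstack([np.array(states), np.array(inputs)])        # (T, n + m)
theta, *_ = np.linalg.lstsq(Z, np.array(next_states), rcond=None)
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]

# 3) Certainty-equivalent LQR: solve the Riccati equation for the estimated model.
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)   # u = -K x
print("estimated A:\n", A_hat, "\ncontroller K:", K)
```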