Efficient Skill Acquisition for Complex Manipulation Tasks in Obstructed Environments
- URL: http://arxiv.org/abs/2303.03365v1
- Date: Mon, 6 Mar 2023 18:49:59 GMT
- Authors: Jun Yamada, Jack Collins, Ingmar Posner
- Abstract summary: We propose a system for efficient skill acquisition that leverages an object-centric generative model (OCGM) for versatile goal identification.
OCGM enables one-shot target object identification and re-identification in new scenes, allowing MP to guide the robot to the target object while avoiding obstacles.
- Score: 18.348489257164356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data efficiency in robotic skill acquisition is crucial for operating robots
in varied small-batch assembly settings. To operate in such environments,
robots must have robust obstacle avoidance and versatile goal conditioning
acquired from only a few simple demonstrations. Existing approaches, however,
fall short of these requirements. Deep reinforcement learning (RL) enables a
robot to learn complex manipulation tasks but is often limited to small task
spaces in the real world due to sample inefficiency and safety concerns. Motion
planning (MP) can generate collision-free paths in obstructed environments, but
cannot solve complex manipulation tasks and requires goal states often
specified by a user or an object-specific pose estimator. In this work, we propose
a system for efficient skill acquisition that leverages an object-centric
generative model (OCGM) for versatile goal identification: the OCGM specifies a
goal for MP, which is combined with RL to solve complex manipulation tasks in
obstructed environments. Specifically, OCGM enables one-shot target object identification
and re-identification in new scenes, allowing MP to guide the robot to the
target object while avoiding obstacles. This is combined with a skill
transition network, which bridges the gap between terminal states of MP and
feasible start states of a sample-efficient RL policy. The experiments
demonstrate that our OCGM-based one-shot goal identification achieves accuracy
competitive with baseline approaches and that our modular
framework outperforms competitive baselines, including a state-of-the-art RL
algorithm, by a significant margin for complex manipulation tasks in obstructed
environments.
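The pipeline described in the abstract can be summarized as: OCGM identifies the target object, MP drives the robot to it while avoiding obstacles, a skill transition network bridges MP's terminal state to a feasible RL start state, and the RL policy finishes the task. A minimal sketch of that control flow follows; every function, class, and value here is a hypothetical stand-in for illustration, not the authors' implementation.

```python
"""Hedged sketch of the modular framework: OCGM goal identification ->
motion planning (MP) -> skill transition network -> RL policy.
All names and numbers below are invented placeholders."""

from dataclasses import dataclass


@dataclass
class State:
    pose: tuple  # end-effector pose, simplified to an (x, y, z) tuple


def ocgm_identify_goal(scene_objects, target_id):
    # One-shot re-identification: look up the demonstrated target in the
    # new scene (stand-in for matching against OCGM object slots).
    return scene_objects[target_id]


def motion_plan(start, goal):
    # Collision-free path to the goal region (stub: jump straight there).
    return State(pose=goal)


def transition_network(mp_terminal):
    # Bridge MP's terminal state to a feasible RL start state
    # (stub offset standing in for the learned transition network).
    x, y, z = mp_terminal.pose
    return State(pose=(x, y, z - 0.05))


def rl_policy(rl_start):
    # Sample-efficient skill policy executed from the bridged start state.
    return "task_solved"


def run_pipeline(scene_objects, target_id, start):
    goal = ocgm_identify_goal(scene_objects, target_id)
    mp_terminal = motion_plan(start, goal)
    rl_start = transition_network(mp_terminal)
    return rl_policy(rl_start)
```

For example, `run_pipeline({"peg": (0.4, 0.1, 0.3)}, "peg", State(pose=(0.0, 0.0, 0.5)))` walks the four stages in order; the point of the sketch is the modular hand-off between components, which is what lets each stage be swapped or trained independently.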
Related papers
- COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems.
A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots.
The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
arXiv Detail & Related papers (2024-09-23T15:53:41Z)
- Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning [61.294110816231886]
We introduce a sparse, reusable, and flexible policy, Sparse Diffusion Policy (SDP)
SDP selectively activates experts and skills, enabling efficient and task-specific learning without retraining the entire model.
Demos and code can be found at https://forrest-110.io/sparse_diffusion_policy/.
arXiv Detail & Related papers (2024-07-01T17:59:56Z)
- GenCHiP: Generating Robot Policy Code for High-Precision and Contact-Rich Manipulation Tasks [28.556818911535498]
Large Language Models (LLMs) have been successful at generating robot policy code, but so far these results have been limited to high-level tasks.
We find that, with the right action space, LLMs are capable of successfully generating policies for a variety of contact-rich and high-precision manipulation tasks.
arXiv Detail & Related papers (2024-04-09T22:47:25Z)
- Enhancing Robotic Navigation: An Evaluation of Single and Multi-Objective Reinforcement Learning Strategies [0.9208007322096532]
This study presents a comparative analysis between single-objective and multi-objective reinforcement learning methods for training a robot to navigate effectively to an end goal.
By modifying the reward function to return a vector of rewards, each pertaining to a distinct objective, the robot learns a policy that effectively balances the different goals.
arXiv Detail & Related papers (2023-12-13T08:00:26Z)
- AdverSAR: Adversarial Search and Rescue via Multi-Agent Reinforcement Learning [4.843554492319537]
We propose an algorithm that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications.
It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time.
The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments.
arXiv Detail & Related papers (2022-12-20T08:13:29Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve strong performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
- Variable Compliance Control for Robotic Peg-in-Hole Assembly: A Deep Reinforcement Learning Approach [4.045850174820418]
We propose a learning-based method to solve peg-in-hole tasks with position uncertainty of the hole.
Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.