Learning to Regrasp by Learning to Place
- URL: http://arxiv.org/abs/2109.08817v1
- Date: Sat, 18 Sep 2021 03:07:06 GMT
- Title: Learning to Regrasp by Learning to Place
- Authors: Shuo Cheng, Kaichun Mo, Lin Shao
- Abstract summary: Regrasping is needed when a robot's current grasp pose fails to perform desired manipulation tasks.
We propose a system that takes partial point clouds of an object and the supporting environment as inputs and outputs a sequence of pick-and-place operations.
We show that our system achieves a 73.3% success rate in regrasping diverse objects.
- Score: 19.13976401970985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore whether a robot can learn to regrasp a diverse set
of objects to achieve various desired grasp poses. Regrasping is needed
whenever a robot's current grasp pose fails to perform desired manipulation
tasks. Endowing robots with such an ability has applications in many domains
such as manufacturing or domestic services. Yet, it is a challenging task due
to the large diversity of geometry in everyday objects and the high
dimensionality of the state and action space. In this paper, we propose a
system for robots to take partial point clouds of an object and the supporting
environment as inputs and output a sequence of pick-and-place operations to
transform an initial object grasp pose into the desired grasp pose. The
key techniques are a neural stable-placement predictor and a regrasp-graph-based
solver that leverage and modify the surrounding environment. We
introduce a new and challenging synthetic dataset for learning and evaluating
the proposed approach. On this dataset, our system achieves a 73.3% success
rate in regrasping diverse objects.
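To make the regrasp-graph idea concrete, here is a minimal sketch of how such a graph can be searched: nodes are candidate grasp poses, and an edge exists whenever some stable placement lets the robot put the object down with one grasp and pick it back up with another. The `can_switch` predicate below is a hypothetical stand-in for the paper's neural stable-placement predictor; nothing here is the authors' implementation.

```python
from collections import deque

def plan_regrasp(start_grasp, goal_grasp, grasps, placements, can_switch):
    """Breadth-first search over a regrasp graph.

    grasps     -- hashable candidate grasp poses (graph nodes)
    placements -- candidate stable placements in the environment
    can_switch -- can_switch(g_from, g_to, placement) -> bool; a stand-in
                  for a learned stable-placement predictor: True if the
                  object can be placed at `placement` with grasp g_from
                  and re-picked with grasp g_to.
    Returns a list of (placement, regrasp) pick-and-place steps, or None.
    """
    queue = deque([(start_grasp, [])])
    visited = {start_grasp}
    while queue:
        grasp, plan = queue.popleft()
        if grasp == goal_grasp:
            return plan
        for nxt in grasps:
            if nxt in visited:
                continue
            for placement in placements:
                if can_switch(grasp, nxt, placement):
                    visited.add(nxt)
                    queue.append((nxt, plan + [(placement, nxt)]))
                    break
    return None
```

Because the search is breadth-first, the returned plan uses the fewest pick-and-place operations allowed by the predictor's edge decisions.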
Related papers
- Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration [79.76009817889397]
We propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of compliant food objects.
We present an LfD learning policy that automatically removes inconsistent demonstrations and estimates the teacher's intended policy.
The proposed approach has a wide range of potential applications in food-handling industry sectors.
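One simple way to realize the "remove inconsistent demonstrations" step is to score each demonstration by its average distance to the others and keep only the most typical ones before estimating the intended policy. This is a hypothetical sketch that assumes time-aligned trajectories, not the paper's actual robust-LfD procedure.

```python
import numpy as np

def filter_demonstrations(demos, keep_fraction=0.8):
    """Drop outlier demonstrations, then estimate the intended policy.

    demos: array (n_demos, horizon, state_dim) of time-aligned
    trajectories. Each demo is scored by its mean distance to all other
    demos; the most atypical ones are discarded.
    """
    demos = np.asarray(demos, dtype=float)
    n = len(demos)
    dists = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(demos[i] - demos[j], axis=-1).mean()
            dists[i, j] = dists[j, i] = d
    scores = dists.mean(axis=1)                     # atypicality per demo
    keep = np.argsort(scores)[: max(1, int(keep_fraction * n))]
    # Estimate the teacher's intended trajectory from the survivors.
    return demos[keep].mean(axis=0)
```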
arXiv Detail & Related papers (2023-09-22T13:30:26Z)
- simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects [16.178331266949293]
We propose simPLE as a solution for precise and general pick-and-place.
SimPLE learns to pick, regrasp, and place objects precisely, given only the object CAD model and no prior experience.
arXiv Detail & Related papers (2023-07-24T21:22:58Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoned segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
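Reduced to its essentials, the reward is a negative distance to a goal image in a learned embedding space. A minimal sketch, assuming an embedding network `phi` trained with a triplet-style time-contrastive objective (frames close in time should embed close together); the actual architecture and objective details are in the paper.

```python
import torch
import torch.nn.functional as F

def time_contrastive_loss(anchor, positive, negative, phi, margin=0.2):
    """Triplet loss: `positive` is temporally near the anchor frame,
    `negative` is temporally distant; train phi to separate them."""
    return F.triplet_margin_loss(phi(anchor), phi(positive), phi(negative),
                                 margin=margin)

def embedding_reward(obs, goal, phi):
    """Task reward = negative distance to the goal in embedding space."""
    with torch.no_grad():
        return -torch.norm(phi(obs) - phi(goal), dim=-1)
```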
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp objects in unseen poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
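The input/output shape of such a policy (point cloud in, continuous action out) can be sketched with a PointNet-style encoder. This is a generic illustration, not the DexTransfer architecture; the action dimension is an arbitrary placeholder for a multi-fingered hand command.

```python
import torch
import torch.nn as nn

class PointCloudPolicy(nn.Module):
    """Per-point MLP + max-pooling (PointNet-style), then an action head
    that predicts a continuous command such as joint deltas."""
    def __init__(self, action_dim=22):       # action_dim is illustrative
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh())

    def forward(self, points):               # points: (batch, n_points, 3)
        feats = self.point_mlp(points)       # (batch, n_points, 256)
        global_feat = feats.max(dim=1).values
        return self.head(global_feat)        # actions in [-1, 1]
```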
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- OmniHang: Learning to Hang Arbitrary Objects using Contact Point Correspondences and Neural Collision Estimation [14.989379991558046]
We propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.
Our system learns to estimate the contact point correspondences between the object and supporting item to get an estimated stable pose.
Then, the robot needs to find a collision-free path to move the object from its initial pose to the stable hanging pose.
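Given predicted contact-point correspondences, the stable-pose estimate reduces to the rigid transform that best aligns the object's contact points with the supporting item's. The closed-form Kabsch/SVD solution is sketched below; the collision estimation and path search are the paper's separate components.

```python
import numpy as np

def pose_from_correspondences(obj_pts, support_pts):
    """Kabsch: rotation R and translation t minimizing
    sum_i ||R @ obj_pts[i] + t - support_pts[i]||^2, both inputs (n, 3)."""
    p_mean, q_mean = obj_pts.mean(axis=0), support_pts.mean(axis=0)
    P, Q = obj_pts - p_mean, support_pts - q_mean
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```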
arXiv Detail & Related papers (2021-03-26T06:11:05Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve strong performance but require large amounts of training data collected on the same robotic platform.
We formulate adaptation across platforms as a few-shot meta-learning problem: the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
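The few-shot formulation is in the spirit of gradient-based meta-learning; a minimal inner-loop sketch (a generic MAML-style step, not the paper's Bayesian variant) looks like this, with the outer loop backpropagating the query-set loss through the adapted parameters across platforms:

```python
import torch

def adapt_to_platform(params, loss_fn, support_batch, lr_inner=0.01):
    """One inner-loop gradient step of platform-specific adaptation.
    params: dict of tensors with requires_grad=True;
    loss_fn(params, batch) evaluates the policy loss functionally."""
    loss = loss_fn(params, support_batch)
    grads = torch.autograd.grad(loss, list(params.values()),
                                create_graph=True)  # keep graph for outer loop
    return {name: p - lr_inner * g
            for (name, p), g in zip(params.items(), grads)}
```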
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- Object Detection and Pose Estimation from RGB and Depth Data for Real-time, Adaptive Robotic Grasping [0.0]
We propose a system that performs real-time object detection and pose estimation for the purpose of dynamic robot grasping.
The proposed approach allows the robot to detect the object's identity and actual pose, and then adapt a canonical grasp to the newly detected pose.
For training, the system defines a canonical grasp by capturing the relative pose of an object with respect to the gripper attached to the robot's wrist.
During testing, once a new pose is detected, a canonical grasp for the object is identified and then dynamically adapted by adjusting the robot arm's joint angles.
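The adaptation step is, at its core, a pose composition: the grasp is stored relative to the object once, then re-expressed in the world frame of each new detection, and the arm's joint angles follow from inverse kinematics on the resulting gripper pose. A sketch of the geometry using 4x4 homogeneous transforms, not the paper's code:

```python
import numpy as np

def adapt_canonical_grasp(T_world_obj_new, T_obj_grasp):
    """Re-express a stored object-relative grasp in the world frame of a
    newly detected object pose.

    T_obj_grasp is captured once at training time as
    inv(T_world_obj_canonical) @ T_world_grasp_canonical.
    Both arguments are 4x4 homogeneous transforms.
    """
    return T_world_obj_new @ T_obj_grasp
```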
arXiv Detail & Related papers (2021-01-18T22:22:47Z)
- Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots [5.739787445246959]
We propose a novel planning method that can efficiently estimate the order and positions of the objects to be tidied up by learning the parameters of a probabilistic generative model.
The model allows a robot to learn the co-occurrence probability distributions of objects and places to tidy up, using multimodal sensor information collected in a tidied environment.
We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.
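A stripped-down version of the underlying idea: estimate p(place | object) from observations of a tidied scene, then tidy first the objects whose target place is most certain. This hypothetical sketch uses a plain categorical co-occurrence model; the paper's generative model additionally fuses multimodal sensor information.

```python
from collections import Counter, defaultdict

def learn_cooccurrence(observations):
    """observations: iterable of (object, place) pairs from a tidied
    scene. Returns p(place | object) as nested dicts."""
    counts = defaultdict(Counter)
    for obj, place in observations:
        counts[obj][place] += 1
    return {obj: {pl: c / sum(ctr.values()) for pl, c in ctr.items()}
            for obj, ctr in counts.items()}

def tidy_plan(untidy_objects, p_place):
    """Order objects by how confidently they can be assigned a place;
    return (object, target_place) pairs, most certain first."""
    scored = [(max(p_place[o].values()), o,
               max(p_place[o], key=p_place[o].get))
              for o in untidy_objects if o in p_place]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [(o, pl) for _, o, pl in scored]
```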
arXiv Detail & Related papers (2020-02-10T11:49:58Z)