simPLE: a visuotactile method learned in simulation to precisely pick,
localize, regrasp, and place objects
- URL: http://arxiv.org/abs/2307.13133v1
- Date: Mon, 24 Jul 2023 21:22:58 GMT
- Authors: Maria Bauza, Antonia Bronars, Yifan Hou, Ian Taylor, Nikhil
Chavan-Dafle, Alberto Rodriguez
- Abstract summary: This paper explores solutions for precise and general pick-and-place.
We propose simPLE as a solution to precise pick-and-place.
simPLE learns to pick, regrasp, and place objects precisely, given only the object CAD model and no prior experience.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing robotic systems have a clear tension between generality and
precision. Deployed solutions for robotic manipulation tend to fall into the
paradigm of one robot solving a single task, lacking precise generalization,
i.e., the ability to solve many tasks without compromising on precision. This
paper explores solutions for precise and general pick-and-place. In precise
pick-and-place, i.e., kitting, the robot transforms an unstructured arrangement
of objects into an organized arrangement, which can facilitate further
manipulation. We propose simPLE (simulation to Pick Localize and PLacE) as a
solution to precise pick-and-place. simPLE learns to pick, regrasp and place
objects precisely, given only the object CAD model and no prior experience. We
develop three main components: task-aware grasping, visuotactile perception,
and regrasp planning. Task-aware grasping computes affordances of grasps that
are stable, observable, and favorable to placing. The visuotactile perception
model relies on matching real observations against a set of simulated ones
through supervised learning. Finally, we compute the desired robot motion by
solving a shortest path problem on a graph of hand-to-hand regrasps. On a
dual-arm robot equipped with visuotactile sensing, we demonstrate
pick-and-place of 15 diverse objects with simPLE. The objects span a wide range
of shapes and simPLE achieves successful placements into structured
arrangements with 1 mm clearance over 90% of the time for 6 objects, and over
80% of the time for 11 objects. Videos are available at
http://mcube.mit.edu/research/simPLE.html .
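The visuotactile perception component matches real observations against a set of simulated ones. A minimal sketch of that matching step is nearest-neighbor retrieval in an embedding space: compare the embedding of the real observation against pre-computed embeddings of simulated observations, each tagged with the object pose that produced it. The function name and the use of cosine similarity are illustrative assumptions; simPLE's actual model is a learned supervised matcher.

```python
import numpy as np

def localize_by_matching(real_embedding, sim_embeddings, sim_poses):
    """Return the pose whose simulated observation best matches the
    real one, via cosine similarity in embedding space.

    real_embedding: (d,) embedding of the real observation
    sim_embeddings: (n, d) embeddings of pre-rendered simulated observations
    sim_poses:      length-n list of poses, one per simulated observation
    """
    # Cosine similarity between the real observation and every
    # simulated observation in the library.
    sims = sim_embeddings @ real_embedding
    sims = sims / (np.linalg.norm(sim_embeddings, axis=1)
                   * np.linalg.norm(real_embedding))
    best = int(np.argmax(sims))
    return sim_poses[best], float(sims[best])
```

In this sketch, enlarging the simulated library trades memory and lookup time for finer pose resolution, which is why such libraries are typically generated offline from the CAD model.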
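The regrasp planner computes robot motion by solving a shortest path problem on a graph of hand-to-hand regrasps. Under the assumption that nodes are candidate grasps and edge weights are transfer costs (the graph construction and cost model here are illustrative, not simPLE's), the search itself is plain Dijkstra:

```python
import heapq

def plan_regrasps(graph, start, goal):
    """Dijkstra over a regrasp graph.

    graph: {grasp: [(neighbor_grasp, transfer_cost), ...]}
    Returns the cheapest grasp sequence from start to goal,
    or None if the goal grasp is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if goal not in visited and goal not in dist:
        return None
    if goal not in dist:
        return None
    # Walk the predecessor chain back to the start grasp.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

For example, if a direct pick-to-place transfer is expensive but a hand-to-hand regrasp makes the placement cheap, the planner returns the multi-step sequence with the lower total cost.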
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- Counting Objects in a Robotic Hand [6.057565013011719]
A robot performing multi-object grasping needs to sense the number of objects in the hand after grasping.
This paper presents a data-driven contrastive learning-based counting classifier with a modified loss function.
The proposed contrastive learning-based counting approach achieved above 96% accuracy for all three objects in the real setup.
arXiv Detail & Related papers (2024-04-09T21:46:14Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools [96.38972082580294]
DiffSkill is a novel framework that uses a differentiable physics simulator for skill abstraction to solve deformable object manipulation tasks.
In particular, we first obtain short-horizon skills using individual tools from a gradient-based simulator.
We then learn a neural skill abstractor from the demonstration trajectories which takes RGBD images as input.
arXiv Detail & Related papers (2022-03-31T17:59:38Z)
- IFOR: Iterative Flow Minimization for Robotic Object Rearrangement [92.97142696891727]
IFOR, Iterative Flow Minimization for Robotic Object Rearrangement, is an end-to-end method for the problem of object rearrangement for unknown objects.
We show that our method applies to cluttered scenes, and in the real world, while training only on synthetic data.
arXiv Detail & Related papers (2022-02-01T20:03:56Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- Learning to Regrasp by Learning to Place [19.13976401970985]
Regrasping is needed when a robot's current grasp pose fails to perform desired manipulation tasks.
We propose a system for robots to take partial point clouds of an object and the supporting environment as inputs and output a sequence of pick-and-place operations.
We show that our system is able to achieve 73.3% success rate of regrasping diverse objects.
arXiv Detail & Related papers (2021-09-18T03:07:06Z)
- Predicting Stable Configurations for Semantic Placement of Novel Objects [37.18437299513799]
Our goal is to enable robots to repose previously unseen objects according to learned semantic relationships in novel environments.
We build our models and training from the ground up to be tightly integrated with our proposed planning algorithm for semantic placement of unknown objects.
Our approach enables motion planning for semantic rearrangement of unknown objects in scenes with varying geometry from only RGB-D sensing.
arXiv Detail & Related papers (2021-08-26T23:05:05Z)
- Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z)
- Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps [17.07993278175686]
We propose a method to optimize for grasp, precision, and task performance by learning three cascaded networks.
We evaluate our method in simulation on three common assembly tasks: inserting gears onto pegs, aligning brackets into corners, and inserting shapes into slots.
arXiv Detail & Related papers (2020-11-04T18:29:01Z)
- Low Dimensional State Representation Learning with Reward-shaped Priors [7.211095654886105]
We propose a method that aims at learning a mapping from the observations into a lower-dimensional state space.
This mapping is learned with unsupervised learning using loss functions shaped to incorporate prior knowledge of the environment and the task.
We test the method on several mobile robot navigation tasks in a simulation environment and also on a real robot.
arXiv Detail & Related papers (2020-07-29T13:00:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.