Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps
- URL: http://arxiv.org/abs/2011.02462v1
- Date: Wed, 4 Nov 2020 18:29:01 GMT
- Title: Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps
- Authors: Jialiang Zhao, Daniel Troniak, Oliver Kroemer
- Abstract summary: We propose a method to optimize for grasp, precision, and task performance by learning three cascaded networks.
We evaluate our method in simulation on three common assembly tasks: inserting gears onto pegs, aligning brackets into corners, and inserting shapes into slots.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust task-oriented grasp planning is vital for autonomous robotic precision
assembly tasks. Knowledge of the objects' geometry and preconditions of the
target task should be incorporated when determining the proper grasp to
execute. However, several factors contribute to the challenges of realizing
these grasps such as noise when controlling the robot, unknown object
properties, and difficulties modeling complex object-object interactions. We
propose a method that decomposes this problem and optimizes for grasp
robustness, precision, and task performance by learning three cascaded
networks. We evaluate our method in simulation on three common assembly tasks:
inserting gears onto pegs, aligning brackets into corners, and inserting shapes
into slots. Our policies are trained using a curriculum based on large-scale
self-supervised grasp simulations with procedurally generated objects. Finally,
we evaluate the performance of the first two tasks with a real robot where our
method achieves 4.28mm error for bracket insertion and 1.44mm error for gear
insertion.
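The cascaded design described in the abstract can be pictured as a filtering pipeline: candidate grasps pass through a robustness stage, a precision stage, and a task-performance stage in turn, and the best survivor is executed. Below is a minimal sketch of that control flow. The scoring functions, thresholds, and 2-D grasp representation are all hypothetical stand-ins for the paper's learned networks, not the authors' actual models.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the paper's three learned, cascaded networks.
# Each maps a candidate grasp (here just a 2-D pose offset) to a score in [0, 1].
def robustness_score(grasp):
    x, y = grasp
    return 1.0 - min(1.0, abs(x) + abs(y))   # grasps near the centroid are sturdier

def precision_score(grasp):
    x, y = grasp
    return 1.0 - min(1.0, 2.0 * abs(y))      # small lateral offset -> repeatable grasp

def task_score(grasp):
    x, y = grasp
    return 1.0 - min(1.0, 2.0 * abs(x))      # keeps the mating surface unobstructed

def cascade(candidates, threshold=0.5):
    """Filter candidates through the three scorers in sequence; return the
    survivor with the best combined score, or None if no grasp survives."""
    survivors = list(candidates)
    for score in (robustness_score, precision_score, task_score):
        survivors = [g for g in survivors if score(g) >= threshold]
        if not survivors:
            return None
    return max(survivors,
               key=lambda g: robustness_score(g) + precision_score(g) + task_score(g))

# Sample random candidate grasps and pick one via the cascade.
candidates = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
best = cascade(candidates)
print(best)
```

The key property this sketch illustrates is that each stage only sees grasps the previous stage accepted, so the later, more task-specific criteria never need to rank grasps that are already unstable or imprecise.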
Related papers
- Counting Objects in a Robotic Hand (2024-04-09)
  A robot performing multi-object grasping needs to sense how many objects are in its hand after grasping. This paper presents a data-driven, contrastive-learning-based counting classifier with a modified loss function, which achieved above 96% accuracy for all three objects in the real setup.
- Learning Dual-arm Object Rearrangement for Cartesian Robots (2024-02-21)
  This work addresses the dual-arm object rearrangement problem, abstracted from a realistic industrial scenario involving Cartesian robots. The goal is to transfer all objects from sources to targets in the minimum total completion time; the authors develop an object-to-arm task assignment strategy that minimizes cumulative execution time and maximizes dual-arm cooperation efficiency.
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks (2023-12-11)
  Large Language Models (LLMs) have achieved impressive results in creating robotic agents for open-vocabulary tasks. This paper presents an interactive planning technique for partially observable tasks using LLMs.
- Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge (2023-10-23)
  The ability to ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually. The authors improve phrase-grounding models' ability to localize active objects by learning the role of objects undergoing change and extracting them accurately from the instructions, evaluating their framework on the Ego4D and Epic-Kitchens datasets.
- simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects (2023-07-24)
  This paper proposes simPLE, a visuotactile solution for precise and general pick-and-place. SimPLE learns to pick, regrasp, and place objects precisely, given only the object's CAD model and no prior experience.
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration (2022-11-09)
  The authors leverage a sequential bias to learn control policies for complex robotic tasks from a single demonstration, showing that DCIL-II solves challenging simulated tasks such as humanoid locomotion and stand-up with unprecedented sample efficiency.
- Efficient and Robust Training of Dense Object Nets for Multi-Object Robot Manipulation (2022-06-24)
  This paper proposes a framework for robust and efficient training of Dense Object Nets (DON), focusing on training with multi-object data instead of singulated objects, combined with a well-chosen augmentation scheme. The robustness and accuracy of the framework are demonstrated on a real-world robotic grasping task.
- Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery (2022-03-08)
  The authors tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robotic manipulator. Their hierarchical approach decomposes the overall task into three feasible levels that mutually benefit from each other.
- Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees (2020-12-24)
  This work solves multi-task problems by learning structured policies from human demonstrations. The policy structure is inspired by RMPflow, a framework for combining subtask policies defined on different spaces, and the authors derive an end-to-end learning objective suited to the multi-task setting.
- Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment (2020-10-28)
  First-person object-interaction tasks in high-fidelity 3D simulated environments such as AI2-THOR pose significant sample-efficiency challenges for reinforcement learning agents. The authors show that such tasks can be learned from scratch, without supervision, by training an attentive object model as an auxiliary task.
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning (2020-10-08)
  CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment. Tasks consist of constructing 3D shapes from a given set of blocks, inspired by how children learn to build complex structures.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.