Graph-based Reinforcement Learning meets Mixed Integer Programs: An
application to 3D robot assembly discovery
- URL: http://arxiv.org/abs/2203.04120v1
- Date: Tue, 8 Mar 2022 14:44:51 GMT
- Title: Graph-based Reinforcement Learning meets Mixed Integer Programs: An
application to 3D robot assembly discovery
- Authors: Niklas Funk, Svenja Menzenbach, Georgia Chalvatzaki, Jan Peters
- Abstract summary: We tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robotic manipulator.
Our novel hierarchical approach aims at efficiently decomposing the overall task into three feasible levels that benefit mutually from each other.
- Score: 34.25379651790627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robot assembly discovery is a challenging problem that lives at the
intersection of resource allocation and motion planning. The goal is to combine
a predefined set of objects to form something new while considering task
execution with the robot-in-the-loop. In this work, we tackle the problem of
building arbitrary, predefined target structures entirely from scratch using a
set of Tetris-like building blocks and a robotic manipulator. Our novel
hierarchical approach aims at efficiently decomposing the overall task into
three feasible levels that benefit mutually from each other. On the high level,
we run a classical mixed-integer program for global optimization of block-type
selection and the blocks' final poses to recreate the desired shape. Its output
is then exploited to efficiently guide the exploration of an underlying
reinforcement learning (RL) policy. This RL policy draws its generalization
properties from a flexible graph-based representation that is learned through
Q-learning and can be refined with search. Moreover, it accounts for the
necessary conditions of structural stability and robotic feasibility that
cannot be effectively reflected in the previous layer. Lastly, a grasp and
motion planner transforms the desired assembly commands into robot joint
movements. We demonstrate the performance of the proposed method on a set of
competitive simulated robot assembly discovery environments and report
performance and robustness gains compared to an unstructured end-to-end
approach. Videos are available at https://sites.google.com/view/rl-meets-milp .
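The high-level stage described in the abstract, a mixed-integer program selecting block types and poses to recreate a target shape, can be illustrated with a toy stand-in. The sketch below is hypothetical and not the authors' implementation: it replaces the MIP with exhaustive backtracking search over 2D block placements, but captures the same decision variables (which block type, which pose) and the same covering constraint. Block shapes, names, and the target are illustrative assumptions.

```python
# Tetris-like block types as sets of (row, col) cell offsets (2D stand-in
# for the paper's 3D blocks). These shapes are illustrative assumptions.
BLOCKS = {
    "I2": {(0, 0), (0, 1)},          # 1x2 bar
    "L3": {(0, 0), (1, 0), (1, 1)},  # small L-piece
}

def rotations(cells):
    """All distinct 90-degree rotations of a block, normalized so the
    minimum row/column offset is zero."""
    out, cur = [], cells
    for _ in range(4):
        cur = {(c, -r) for r, c in cur}  # rotate 90 degrees
        mr = min(r for r, _ in cur)
        mc = min(c for _, c in cur)
        norm = frozenset((r - mr, c - mc) for r, c in cur)
        if norm not in out:
            out.append(norm)
    return out

def solve(target):
    """Backtracking stand-in for the block-selection MIP: cover every cell
    of `target` with non-overlapping placements of the available blocks.
    Returns a list of (block_name, placed_cells) or None if infeasible."""
    target = set(target)
    if not target:
        return []
    anchor = min(target)  # always fill the top-left-most open cell first
    for name, base_shape in BLOCKS.items():
        for shape in rotations(base_shape):
            for ar, ac in shape:  # try aligning each block cell to the anchor
                placed = {(anchor[0] + r - ar, anchor[1] + c - ac)
                          for r, c in shape}
                if placed <= target:  # placement stays inside the open region
                    rest = solve(target - placed)
                    if rest is not None:
                        return [(name, sorted(placed))] + rest
    return None

# Recreate a 2x2 square target from the available block types.
plan = solve({(0, 0), (0, 1), (1, 0), (1, 1)})
```

In the paper this global selection problem is solved exactly by a MIP solver, and its output then guides the RL policy's exploration; here the backtracking search only mimics the role of that layer on a toy instance.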
Related papers
- AssemblyComplete: 3D Combinatorial Construction with Deep Reinforcement Learning [4.3507834596906125]
A critical goal in robotics is to teach robots to adapt to real-world collaborative tasks, particularly in automatic assembly.

This paper introduces 3D assembly completion, demonstrated using unit primitives (i.e., Lego bricks).
We propose a two-part deep reinforcement learning (DRL) framework that tackles teaching the robot to understand the objective of an incomplete assembly and learning a construction policy to complete the assembly.
arXiv Detail & Related papers (2024-10-20T18:51:17Z)
- Generalize by Touching: Tactile Ensemble Skill Transfer for Robotic Furniture Assembly [24.161856591498825]
Tactile Ensemble Skill Transfer (TEST) is a pioneering offline reinforcement learning (RL) approach that incorporates tactile feedback in the control loop.
TEST's core design is to learn a skill transition model for high-level planning, along with a set of adaptive intra-skill goal-reaching policies.
Results indicate that TEST can achieve a success rate of 90% and is over 4 times more efficient than the generalization policy.
arXiv Detail & Related papers (2024-04-26T20:27:10Z)
- Cognitive Planning for Object Goal Navigation using Generative AI Models [0.979851640406258]
We present a novel framework for solving the object goal navigation problem that generates efficient exploration strategies.
Our approach enables a robot to navigate unfamiliar environments by leveraging Large Language Models (LLMs) and Large Vision-Language Models (LVLMs).
arXiv Detail & Related papers (2024-03-30T10:54:59Z)
- RPMArt: Towards Robust Perception and Manipulation for Articulated Objects [56.73978941406907]
We propose a framework towards Robust Perception and Manipulation for Articulated Objects (RPMArt).
RPMArt learns to estimate the articulation parameters and manipulate the articulation part from the noisy point cloud.
We introduce an articulation-aware classification scheme to enhance its ability for sim-to-real transfer.
arXiv Detail & Related papers (2024-03-24T05:55:39Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Multi-level Reasoning for Robotic Assembly: From Sequence Inference to Contact Selection [74.40109927350856]
We present the Part Assembly Sequence Transformer (PAST) to infer assembly sequences from a target blueprint.
We then use a motion planner and optimization to generate part movements and contacts.
Experimental results show that our approach generalizes better than prior methods.
arXiv Detail & Related papers (2023-12-17T00:47:13Z)
- Efficient and Feasible Robotic Assembly Sequence Planning via Graph Representation Learning [22.447462847331312]
We propose a holistic graphical approach including a graph representation called Assembly Graph for product assemblies.
With GRACE, we are able to extract meaningful information from the graph input and predict assembly sequences in a step-by-step manner.
In experiments, we show that our approach can predict feasible assembly sequences across product variants of aluminum profiles.
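Step-by-step assembly sequence prediction from a graph representation, as in the entry above, can be sketched in a simplified form. The following is a hypothetical stand-in, not GRACE itself: it replaces the learned model with plain topological ordering over explicit precedence constraints, which is the classical baseline for producing a feasible step-by-step sequence from an assembly graph. The part names and constraints are invented for illustration.

```python
from collections import deque

def assembly_sequence(parts, precedence):
    """Produce a feasible step-by-step sequence from an assembly graph.
    `precedence` contains edges (a, b) meaning part a must be placed
    before part b. Uses Kahn's topological-sort algorithm."""
    indeg = {p: 0 for p in parts}
    succ = {p: [] for p in parts}
    for a, b in precedence:
        succ[a].append(b)
        indeg[b] += 1
    # Parts with no unmet prerequisites are ready to assemble.
    ready = deque(sorted(p for p in parts if indeg[p] == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in succ[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                ready.append(q)
    if len(order) != len(parts):
        raise ValueError("cyclic precedence constraints: no feasible sequence")
    return order

# Hypothetical product: base before frame, frame before both panels.
seq = assembly_sequence(
    ["base", "frame", "panel_left", "panel_right"],
    [("base", "frame"), ("frame", "panel_left"), ("frame", "panel_right")],
)
# seq -> ['base', 'frame', 'panel_left', 'panel_right']
```

A learned approach like GRACE would instead score the candidate next parts at each step from the graph embedding; the feasibility structure being respected is the same precedence relation shown here.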
arXiv Detail & Related papers (2023-03-17T17:23:14Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
- A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects [25.428781562909606]
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects.
Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics.
arXiv Detail & Related papers (2020-11-16T18:59:33Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.