Efficient State Abstraction using Object-centered Predicates for
Manipulation Planning
- URL: http://arxiv.org/abs/2007.08251v1
- Date: Thu, 16 Jul 2020 10:52:53 GMT
- Title: Efficient State Abstraction using Object-centered Predicates for
Manipulation Planning
- Authors: Alejandro Agostini, Dongheui Lee
- Abstract summary: We propose an object-centered representation that permits characterizing a much wider set of possible changes in configuration spaces.
Based on this representation, we define universal planning operators for picking and placing actions that permit generating plans with geometric and force consistency.
- Score: 86.24148040040885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The definition of symbolic descriptions that consistently represent relevant
geometrical aspects of manipulation tasks is a challenging problem that has
received little attention in the robotics community. This definition is usually
made from an observer perspective, using a finite set of object relations and
orientations that only satisfy the geometrical constraints needed to execute
experiments under laboratory conditions. This restricts the changes that
manipulation actions can produce in the object configuration space to those
compatible with that particular external reference definition, which greatly
limits the spectrum of possible manipulations. To tackle these limitations, we
propose an object-centered representation that permits characterizing a much
wider set of possible changes in configuration spaces than the traditional
observer-perspective counterpart. Based on this representation, we define
universal planning operators for picking and placing actions that permit
generating plans with geometric and force consistency in manipulation tasks.
This object-centered description is obtained directly from the poses and
bounding boxes of objects using a novel learning mechanism that generates
signal-symbol relations without the need to handcraft these relations for
each particular scenario.
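The abstract states that the object-centered description is derived directly from object poses and bounding boxes. As a minimal illustration only (the paper's actual signal-symbol relations are learned, not hand-coded; the function name, tuple layout, and tolerance below are all illustrative assumptions), a support predicate such as on(A, B) could be evaluated from poses and axis-aligned bounding boxes like this:

```python
def on_top_of(pose_a, bbox_a, pose_b, bbox_b, tol=0.02):
    """Return True if object A plausibly rests on object B.

    pose_*: (x, y, z) center position of the object in meters.
    bbox_*: (dx, dy, dz) full extents of the axis-aligned bounding box.
    Illustrative sketch only; not the paper's learned mechanism.
    """
    ax, ay, az = pose_a
    bx, by, bz = pose_b
    _, _, adz = bbox_a
    bdx, bdy, bdz = bbox_b
    # Force consistency: A's bottom face must be near B's top face (support).
    bottom_a = az - adz / 2
    top_b = bz + bdz / 2
    if abs(bottom_a - top_b) > tol:
        return False
    # Geometric consistency: A's center must project inside B's footprint.
    return abs(ax - bx) <= bdx / 2 and abs(ay - by) <= bdy / 2

# Example: a cup (6x6x10 cm) centered on a table (100x60x50 cm).
cup_on_table = on_top_of((0.1, 0.0, 0.55), (0.06, 0.06, 0.10),
                         (0.0, 0.0, 0.25), (1.0, 0.6, 0.5))
```

Symbols produced this way are relative to the objects themselves rather than to an external reference frame, which is what lets the same pick-and-place operators apply across scenarios.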
Related papers
- LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract Rules [1.3049516752695616]
We propose a "relational bottleneck" that separates object-level features from abstract rules, allowing learning from limited amounts of data.
We adapt the "relational bottleneck" strategy to a high-dimensional space, incorporating explicit vector binding operations between symbols and relational representations.
Our system benefits from the low overhead of operations in hyperdimensional space, making it significantly more efficient than the state of the art when evaluated on a variety of test datasets.
arXiv Detail & Related papers (2024-05-23T11:05:42Z)
- "Set It Up!": Functional Object Arrangement with Compositional Generative Models [48.205899984212074]
We introduce a framework, SetItUp, for learning to interpret under-specified instructions.
We validate our framework on a dataset comprising study desks, dining tables, and coffee tables.
arXiv Detail & Related papers (2024-05-20T10:06:33Z)
- Composable Part-Based Manipulation [61.48634521323737]
We propose composable part-based manipulation (CPM) to improve learning and generalization of robotic manipulation skills.
CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence.
We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities.
arXiv Detail & Related papers (2024-05-09T16:04:14Z)
- Constrained Layout Generation with Factor Graphs [21.07236104467961]
We introduce a factor graph based approach with four latent variable nodes for each room, and a factor node for each constraint.
The factor nodes represent dependencies among the variables to which they are connected, effectively capturing constraints that are potentially of a higher order.
Our approach is simple and generates layouts faithful to the user requirements, demonstrated by a large improvement in IOU scores over existing methods.
arXiv Detail & Related papers (2024-03-30T14:58:40Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Object-Centric Conformance Alignments with Synchronization (Extended Version) [57.76661079749309]
We present a new formalism that combines the ability of object-centric Petri nets to capture one-to-many relations and the one of Petri nets with identifiers to compare and synchronize objects based on their identity.
We propose a conformance checking approach for such nets based on an encoding in satisfiability modulo theories (SMT).
arXiv Detail & Related papers (2023-12-13T21:53:32Z)
- Learning Type-Generalized Actions for Symbolic Planning [4.670305538969915]
We propose a novel concept to generalize symbolic actions using a given entity hierarchy.
In a simulated grid-based kitchen environment, we show that type-generalized actions can be learned from few observations and generalize to novel situations.
arXiv Detail & Related papers (2023-08-09T11:01:46Z)
- Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement [75.9289887536165]
We present a hierarchical abstraction approach to uncover underlying entities.
We show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment.
We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects.
arXiv Detail & Related papers (2023-03-20T18:19:36Z)
- SORNet: Spatial Object-Centric Representations for Sequential Manipulation [39.88239245446054]
Sequential manipulation tasks require a robot to perceive the state of an environment and plan a sequence of actions leading to a desired goal state.
We propose SORNet, which extracts object-centric representations from RGB images conditioned on canonical views of the objects of interest.
arXiv Detail & Related papers (2021-09-08T19:36:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.