Encoding cloth manipulations using a graph of states and transitions
- URL: http://arxiv.org/abs/2009.14681v2
- Date: Thu, 3 Mar 2022 08:50:52 GMT
- Title: Encoding cloth manipulations using a graph of states and transitions
- Authors: Júlia Borràs, Guillem Alenyà and Carme Torras
- Abstract summary: We propose a generic, compact and simplified representation of the states of cloth manipulation.
We also define a Cloth Manipulation Graph that encodes all the strategies to accomplish a task.
- Score: 8.778914180886835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloth manipulation is very relevant for domestic robotic tasks, but it
presents many challenges due to the complexity of representing, recognizing and
predicting the behaviour of cloth under manipulation. In this work, we propose
a generic, compact and simplified representation of the states of cloth
manipulation that allows for representing tasks as sequences of states and
transitions. We also define a Cloth Manipulation Graph that encodes all the
strategies to accomplish a task. Our novel representation is used to encode two
different cloth manipulation tasks, learned from an experiment with human
subjects that recorded video and motion data. We show how our simplified
representation allows us to obtain a map of meaningful motion primitives.
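To make the state-and-transition idea concrete, here is a minimal sketch (in Python, not the authors' implementation) of such a graph: nodes are simplified cloth states, edges are labelled with manipulation primitives, and every path from a start state to a goal state is one strategy for the task. The state and primitive names below are hypothetical placeholders.

```python
# Minimal sketch of a cloth-manipulation graph (illustrative only, not the
# paper's implementation). Nodes are simplified cloth states, edges carry
# manipulation primitives, and a task strategy is a path from start to goal.
from collections import defaultdict, deque

class ClothManipulationGraph:
    def __init__(self):
        # state -> list of (primitive, next_state) transitions
        self.edges = defaultdict(list)

    def add_transition(self, state, primitive, next_state):
        self.edges[state].append((primitive, next_state))

    def strategies(self, start, goal):
        """Yield every acyclic sequence of primitives leading from start to goal."""
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if state == goal:
                yield [primitive for primitive, _ in path]
                continue
            visited = {start} | {s for _, s in path}
            for primitive, nxt in self.edges[state]:
                if nxt not in visited:
                    queue.append((nxt, path + [(primitive, nxt)]))

# Hypothetical states and primitives for a folding task.
g = ClothManipulationGraph()
g.add_transition("crumpled", "grasp_corner", "hanging_one_corner")
g.add_transition("hanging_one_corner", "grasp_second_corner", "hanging_two_corners")
g.add_transition("hanging_two_corners", "lay_flat", "flat_on_table")
g.add_transition("crumpled", "flatten_by_dragging", "flat_on_table")
g.add_transition("flat_on_table", "fold_in_half", "folded")

for strategy in g.strategies("crumpled", "folded"):
    print(" -> ".join(strategy))
```

Because all strategies for a task share the same graph, alternative routes (here, flattening by dragging versus hanging from two corners) appear simply as different paths between the same start and goal states.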
Related papers
- ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation [49.07254928141495]
We propose a novel manipulation methodology, dubbed ImageBrush, that learns visual instructions for more accurate image editing.
Our key idea is to employ a pair of transformation images as visual instructions, which precisely captures human intention.
Our model exhibits robust generalization capabilities on various downstream tasks such as pose transfer, image translation and video inpainting.
arXiv Detail & Related papers (2023-08-02T01:57:11Z)
- Object Discovery from Motion-Guided Tokens [50.988525184497334]
We augment the auto-encoder representation learning framework with motion-guidance and mid-level feature tokenization.
Our approach enables the emergence of interpretable object-specific mid-level features.
arXiv Detail & Related papers (2023-03-27T19:14:00Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- Foldsformer: Learning Sequential Multi-Step Cloth Manipulation With Space-Time Attention [4.2940878152791555]
We present a novel multi-step cloth manipulation planning framework named Foldsformer.
We experimentally evaluate Foldsformer on four representative sequential multi-step manipulation tasks.
Our approach can be transferred from simulation to the real world without additional training or domain randomization.
arXiv Detail & Related papers (2023-01-08T09:15:45Z)
- Learning Fabric Manipulation in the Real World with Human Videos [10.608723220309678]
Fabric manipulation is a long-standing challenge in robotics due to the enormous state space and complex dynamics.
Most prior methods rely heavily on simulation, which is still limited by the large sim-to-real gap of deformable objects.
A promising alternative is to learn fabric manipulation directly from watching humans perform the task.
arXiv Detail & Related papers (2022-11-05T07:09:15Z)
- The dGLI Cloth Coordinates: A Topological Representation for Semantic Classification of Cloth States [6.664736150040093]
We introduce dGLI Cloth Coordinates, a low-dimensional representation of the state of a rectangular piece of cloth.
Our representation is based on a directional derivative of the Gauss Linking Integral and allows us to represent both planar and spatial configurations.
arXiv Detail & Related papers (2022-09-14T15:16:45Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end-to-end, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- Semantic State Estimation in Cloth Manipulation Tasks [0.4812321790984493]
In this paper, we aim to solve the problem of semantic state estimation in cloth manipulation tasks.
We introduce a new large-scale fully-annotated RGB image dataset showing various human demonstrations of different complicated cloth manipulations.
We provide a set of baseline deep networks and benchmark them on the problem of semantic state estimation.
arXiv Detail & Related papers (2022-03-22T11:59:52Z)
- Playful Interactions for Representation Learning [82.59215739257104]
We propose to use playful interactions in a self-supervised manner to learn visual representations for downstream tasks.
We collect 2 hours of playful data in 19 diverse environments and use self-predictive learning to extract visual representations.
Our representations generalize better than standard behavior cloning and can achieve similar performance with only half the number of required demonstrations.
arXiv Detail & Related papers (2021-07-19T17:54:48Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.