QDP: Learning to Sequentially Optimise Quasi-Static and Dynamic
Manipulation Primitives for Robotic Cloth Manipulation
- URL: http://arxiv.org/abs/2303.13320v2
- Date: Sat, 14 Oct 2023 09:19:00 GMT
- Title: QDP: Learning to Sequentially Optimise Quasi-Static and Dynamic
Manipulation Primitives for Robotic Cloth Manipulation
- Authors: David Blanco-Mulero, Gokhan Alcan, Fares J. Abu-Dakka, Ville Kyrki
- Abstract summary: The Quasi-Dynamic Parameterisable (QDP) method optimises parameters such as the motion velocity.
We leverage the framework of Sequential Reinforcement Learning to decouple the parameters that compose the primitives.
- Score: 9.469635938429645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-defined manipulation primitives are widely used for cloth manipulation.
However, cloth properties such as its stiffness or density can highly impact
the performance of these primitives. Although existing solutions have tackled
the parameterisation of pick and place locations, the effect of factors such as
the velocity or trajectory of quasi-static and dynamic manipulation primitives
has been neglected. Choosing appropriate values for these parameters is crucial
to cope with the range of materials present in household cloth objects. To
address this challenge, we introduce the Quasi-Dynamic Parameterisable (QDP)
method, which optimises parameters such as the motion velocity in addition to
the pick and place positions of quasi-static and dynamic manipulation
primitives. In this work, we leverage the framework of Sequential Reinforcement
Learning to sequentially decouple the parameters that compose the primitives.
To evaluate the effectiveness of the method we focus on the task of cloth
unfolding with a robotic arm in simulation and real-world experiments. Our
results in simulation show that by deciding the optimal parameters for the
primitives the performance can improve by 20% compared to sub-optimal ones.
Real-world results demonstrate the advantage of modifying the velocity and
height of manipulation primitives for cloths with different mass, stiffness,
shape and size. Supplementary material, videos, and code can be found at
https://sites.google.com/view/qdp-srl.
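The sequential decoupling described in the abstract can be illustrated with a small sketch: each primitive parameter is decided in turn, with later decisions conditioned on the earlier ones. This is not the authors' code; the parameter names, ranges, and the stand-in policy below are hypothetical.

```python
# Illustrative sketch (not the QDP implementation): sequentially deciding
# the parameters of a manipulation primitive, as in sequential RL where
# each parameter choice conditions on those already fixed.
# Parameter names and value ranges are hypothetical.
import random

PRIMITIVE_PARAMS = ["pick_xy", "place_xy", "height", "velocity"]

def choose_param(name, observation, chosen):
    """Stand-in for a learned policy head: selects one parameter value
    conditioned on the observation and the previously chosen parameters."""
    if name in ("pick_xy", "place_xy"):
        return (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))
    if name == "height":
        return random.uniform(0.05, 0.4)   # metres, hypothetical range
    return random.uniform(0.1, 2.0)        # m/s, hypothetical range

def decide_primitive(observation):
    """Decouple the primitive's parameters into a sequence of decisions."""
    chosen = {}
    for name in PRIMITIVE_PARAMS:
        chosen[name] = choose_param(name, observation, dict(chosen))
    return chosen

params = decide_primitive(observation={"cloth_image": None})
```

In this framing, the velocity and height heads see the already-fixed pick and place positions, which is what lets the method adapt dynamic parameters to a chosen grasp.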
Related papers
- ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation [58.615616224739654]
Conventional robotic manipulation methods usually learn a semantic representation of the observation for prediction.
We propose a dynamic Gaussian Splatting method named ManiGaussian for multi-task robotic manipulation.
Our framework can outperform the state-of-the-art methods by 13.1% in average success rate.
arXiv Detail & Related papers (2024-03-13T08:06:41Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient due to its high computational cost.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- Implicit Neural Representation for Physics-driven Actuated Soft Bodies [15.261578025057593]
This paper utilizes a differentiable, quasi-static, and physics-based simulation layer to optimize for actuation signals parameterized by neural networks.
We define a function that enables a continuous mapping from a spatial point in the material space to the actuation value.
We extend our implicit model to mandible kinematics for the particular case of facial animation and show that we can reliably reproduce facial expressions captured with high-quality capture systems.
arXiv Detail & Related papers (2024-01-26T13:42:12Z)
- Dynamic-Resolution Model Learning for Object Pile Manipulation [33.05246884209322]
We investigate how to learn dynamic and adaptive representations at different levels of abstraction to achieve the optimal trade-off between efficiency and effectiveness.
Specifically, we construct dynamic-resolution particle representations of the environment and learn a unified dynamics model using graph neural networks (GNNs).
We show that our method achieves significantly better performance than state-of-the-art fixed-resolution baselines at the gathering, sorting, and redistribution of granular object piles.
arXiv Detail & Related papers (2023-06-29T05:51:44Z)
- Robust Pivoting Manipulation using Contact Implicit Bilevel Optimization [17.741546783400484]
Generalizable manipulation requires robots to interact with novel objects and environments.
We study robust optimization for planning of pivoting manipulation in the presence of uncertainties.
We present insights about how friction can be exploited to compensate for inaccuracies in the estimates of the physical properties during manipulation.
arXiv Detail & Related papers (2023-03-15T22:25:34Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, how to choose the tunable parameters remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
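The idea of sampling-based MPC over a learned value function, mentioned in the summary above, can be sketched in a few lines. The quadratic "value" here is a toy stand-in for a learned network, and all names are hypothetical; this is an illustration of the optimisation pattern, not the paper's method.

```python
# Hedged sketch of sampling-based MPC over a value function.
# A toy quadratic value replaces a learned network; names are hypothetical.
import random

def value(pose):
    """Toy value function: higher is better, peaked at the goal pose (0, 0, 0)."""
    return -sum(p * p for p in pose)

def sample_perturbation(scale=0.1):
    """Gaussian candidate motion around the current pose."""
    return tuple(random.gauss(0.0, scale) for _ in range(3))

def mpc_step(pose, num_samples=64):
    """Sample candidate motions and greedily keep the best-valued one."""
    best_pose, best_val = pose, value(pose)
    for _ in range(num_samples):
        cand = tuple(p + d for p, d in zip(pose, sample_perturbation()))
        v = value(cand)
        if v > best_val:
            best_pose, best_val = cand, v
    return best_pose

pose = (0.5, -0.3, 0.2)
for _ in range(50):
    pose = mpc_step(pose)
# the pose should have moved toward the goal under the value function
```

Replacing the toy quadratic with a neural value function over SE(3) poses gives the reactive grasping loop the summary describes.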
arXiv Detail & Related papers (2022-06-29T18:47:05Z)
- Robust Value Iteration for Continuous Control Tasks [99.00362538261972]
When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well.
We present Robust Fitted Value Iteration, which uses dynamic programming to compute the optimal value function on the compact state domain.
We show that robust value iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm.
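The core robust backup can be illustrated on a toy discrete MDP: take the worst case over a small set of dynamics perturbations before taking the best action. This is a minimal sketch of the idea, not the paper's algorithm, and the chain MDP below is invented for illustration.

```python
# Minimal robust value-iteration sketch on a toy 1-D chain MDP.
# The adversarial shift is smaller than the largest action, so the agent
# can still guarantee progress toward the goal under worst-case dynamics.

N_STATES = 5                  # states 0..4, state 4 is the goal
GAMMA = 0.9
ACTIONS = (-2, -1, 1, 2)      # moves left or right
PERTURBATIONS = (-1, 0, 1)    # adversarial shift added to the transition

def step(s, a, d):
    """Perturbed deterministic dynamics, clipped to the state space."""
    return max(0, min(N_STATES - 1, s + a + d))

def robust_value_iteration(iters=100):
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [
            max(                               # best action ...
                min(                           # ... under worst-case dynamics
                    (1.0 if step(s, a, d) == N_STATES - 1 else 0.0)
                    + GAMMA * V[step(s, a, d)]
                    for d in PERTURBATIONS
                )
                for a in ACTIONS
            )
            for s in range(N_STATES)
        ]
    return V

V = robust_value_iteration()
```

The resulting values increase monotonically toward the goal state, and a policy greedy with respect to them is robust to any perturbation in the assumed set.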
arXiv Detail & Related papers (2021-05-25T19:48:35Z)
- Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills [41.140532647789456]
We propose a novel view on handling the demonstrated trajectories for acquiring low-dimensional, non-linear latent dynamics.
We introduce a new contextual off-policy RL algorithm, named LAtent-Movements Policy Optimization (LAMPO).
LAMPO yields more sample-efficient policies than common approaches in the literature.
arXiv Detail & Related papers (2020-10-26T17:53:30Z)
- Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate scale variation challenge in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
- Dimensionality Reduction of Movement Primitives in Parameter Space [34.16700176918835]
Movement primitives are an important policy class for real-world robotics.
The high dimensionality of their parametrization makes policy optimization expensive in terms of both samples and computation.
We propose the application of dimensionality reduction in the parameter space, identifying principal movements.
arXiv Detail & Related papers (2020-02-26T16:38:39Z)
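Dimensionality reduction in parameter space, as summarised above, can be sketched with plain PCA on a set of primitive parameter vectors. The data below is synthetic and the setup is hypothetical; it only illustrates how "principal movements" fall out of an SVD of demonstrated parameters.

```python
# Hedged sketch: PCA over movement-primitive parameter vectors to find
# a low-dimensional basis of "principal movements". Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

# 200 demonstrated primitives, each a 50-dimensional parameter vector
# that actually varies along only 3 latent directions (by construction).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
params = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via SVD of the centred data matrix.
centred = params - params.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()

k = 3
principal_movements = Vt[:k]               # (3, 50): reduced basis
low_dim = centred @ principal_movements.T  # (200, 3): compact parameters
```

Optimising the policy in the 3-dimensional coordinates instead of the original 50 is what makes the search cheaper in both samples and computation.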
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.