Learning Diffusion Policies from Demonstrations For Compliant Contact-rich Manipulation
- URL: http://arxiv.org/abs/2410.19235v1
- Date: Fri, 25 Oct 2024 00:56:15 GMT
- Title: Learning Diffusion Policies from Demonstrations For Compliant Contact-rich Manipulation
- Authors: Malek Aburub, Cristian C. Beltran-Hernandez, Tatsuya Kamijo, Masashi Hamaya
- Abstract summary: This paper introduces Diffusion Policies For Compliant Manipulation (DIPCOM), a novel diffusion-based framework for compliant control tasks.
By leveraging generative diffusion models, we develop a policy that predicts Cartesian end-effector poses and adjusts arm stiffness to maintain the necessary force.
Our approach enhances force control through multimodal distribution modeling, improves the integration of diffusion policies in compliance control, and extends our previous work by demonstrating its effectiveness in real-world tasks.
- Score: 5.1245307851495
- Abstract: Robots hold great promise for performing repetitive or hazardous tasks, but achieving human-like dexterity, especially in contact-rich and dynamic environments, remains challenging. Rigid robots, which rely on position or velocity control, often struggle with maintaining stable contact and applying consistent force in force-intensive tasks. Learning from Demonstration has emerged as a solution, but tasks requiring intricate maneuvers, such as powder grinding, present unique difficulties. This paper introduces Diffusion Policies For Compliant Manipulation (DIPCOM), a novel diffusion-based framework designed for compliant control tasks. By leveraging generative diffusion models, we develop a policy that predicts Cartesian end-effector poses and adjusts arm stiffness to maintain the necessary force. Our approach enhances force control through multimodal distribution modeling, improves the integration of diffusion policies in compliance control, and extends our previous work by demonstrating its effectiveness in real-world tasks. We present a detailed comparison between our framework and existing methods, highlighting the advantages and best practices for deploying diffusion-based compliance control.
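The compliant control scheme the abstract describes pairs policy-predicted Cartesian target poses with policy-predicted arm stiffness. A minimal sketch of the underlying Cartesian impedance law is shown below; the function name, the critically damped gain choice, and the example gains are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def impedance_wrench(x_target, x_current, x_dot, stiffness):
    """Cartesian impedance law: F = K (x_d - x) - D * x_dot.

    `stiffness` is the per-axis stiffness the policy would output.
    Damping is set to critical damping (D = 2 sqrt(K)), a common
    default; the paper's actual gain schedule may differ.
    """
    K = np.diag(stiffness)       # per-axis stiffness matrix
    D = 2.0 * np.sqrt(K)         # critical damping (assumption)
    return K @ (x_target - x_current) - D @ x_dot

# Example: stiff along z (the contact normal), compliant in x/y.
x_d = np.array([0.0, 0.0, 0.10])   # predicted target pose (position part)
x = np.array([0.0, 0.0, 0.12])     # current end-effector position
F = impedance_wrench(x_d, x, np.zeros(3), np.array([100.0, 100.0, 800.0]))
```

With the robot 2 cm above the target and zero velocity, the law yields a downward force of 16 N along z, illustrating how a high z-stiffness maintains contact force while low x/y-stiffness keeps the arm compliant laterally.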
Related papers
- Generative Predictive Control: Flow Matching Policies for Dynamic and Difficult-to-Demonstrate Tasks [11.780987653813792]
We introduce generative predictive control, a supervised learning framework for tasks with fast dynamics.
We show how trained flow-matching policies can be warm-started at run-time, maintaining temporal consistency and enabling fast feedback rates.
arXiv Detail & Related papers (2025-02-19T03:33:01Z)
- COMBO-Grasp: Learning Constraint-Based Manipulation for Bimanual Occluded Grasping [56.907940167333656]
Occluded robot grasping arises when the desired grasp poses are kinematically infeasible due to environmental constraints such as surface collisions.
Traditional robot manipulation approaches struggle with the complexity of non-prehensile or bimanual strategies commonly used by humans.
We introduce Constraint-based Manipulation for Bimanual Occluded Grasping (COMBO-Grasp), a learning-based approach which leverages two coordinated policies.
arXiv Detail & Related papers (2025-02-12T01:31:01Z)
- CAIMAN: Causal Action Influence Detection for Sample Efficient Loco-manipulation [17.94272840532448]
We present CAIMAN, a novel framework for learning loco-manipulation that relies solely on sparse task rewards.
We employ a hierarchical control strategy, combining a low-level locomotion policy with a high-level policy that prioritizes task-relevant velocity commands.
We demonstrate the framework's superior sample efficiency, adaptability to diverse environments, and successful transfer to hardware without fine-tuning.
arXiv Detail & Related papers (2025-02-02T16:16:53Z)
- Diffusion Predictive Control with Constraints [51.91057765703533]
Diffusion predictive control with constraints (DPCC) is an algorithm for diffusion-based control with explicit state and action constraints that can deviate from those in the training data.
We show through simulations of a robot manipulator that DPCC outperforms existing methods in satisfying novel test-time constraints while maintaining performance on the learned control task.
arXiv Detail & Related papers (2024-12-12T15:10:22Z)
- Consistency Policy: Accelerated Visuomotor Policies via Consistency Distillation [31.534668378308822]
Consistency Policy is a faster and similarly powerful alternative to Diffusion Policy for learning visuomotor robot control.
By virtue of its fast inference speed, Consistency Policy can enable low latency decision making in resource-constrained robotic setups.
Key design decisions that enabled this performance are the choice of consistency objective, reduced initial sample variance, and the choice of preset chaining steps.
arXiv Detail & Related papers (2024-05-13T06:53:42Z)
- Distributionally Adaptive Meta Reinforcement Learning [85.17284589483536]
We develop a framework for meta-RL algorithms that behave appropriately under test-time distribution shifts.
Our framework centers on an adaptive approach to distributional robustness that trains a population of meta-policies to be robust to varying levels of distribution shift.
We show how our framework allows for improved regret under distribution shift, and empirically show its efficacy on simulated robotics problems.
arXiv Detail & Related papers (2022-10-06T17:55:09Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Deep Reinforcement Learning for Contact-Rich Skills Using Compliant Movement Primitives [0.0]
Further integration of industrial robots is hampered by their limited flexibility, adaptability, and decision-making skills.
We propose different pruning methods that facilitate convergence and generalization.
We demonstrate that the proposed method can learn insertion skills that are invariant to space, size, shape, and closely related scenarios.
arXiv Detail & Related papers (2020-08-30T17:29:43Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
- Learning Compliance Adaptation in Contact-Rich Manipulation [81.40695846555955]
We propose a novel approach for learning predictive models of force profiles required for contact-rich tasks.
The approach combines an anomaly detection based on Bidirectional Gated Recurrent Units (Bi-GRU) and an adaptive force/impedance controller.
arXiv Detail & Related papers (2020-05-01T05:23:34Z)
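The last entry above pairs a learned force-profile predictor (a Bi-GRU) with an adaptive force/impedance controller. A minimal sketch of the adaptation logic, assuming the anomaly signal is simply the deviation between predicted and measured contact force (the Bi-GRU predictor itself is omitted, and all names and gains below are illustrative, not from the paper):

```python
import numpy as np

def adapt_stiffness(k, f_predicted, f_measured, k_soft=50.0,
                    k_stiff=800.0, threshold=5.0, alpha=0.1):
    """Anomaly-gated stiffness adaptation (illustrative sketch).

    When the measured contact force deviates from the predicted force
    profile by more than `threshold` Newtons, treat it as an anomaly and
    blend stiffness toward a compliant value; otherwise recover toward
    the nominal stiff value. `alpha` sets the per-step blending rate.
    """
    anomaly = np.linalg.norm(f_predicted - f_measured) > threshold
    target = k_soft if anomaly else k_stiff
    return (1.0 - alpha) * k + alpha * target  # first-order blend

# Example: a 20 N deviation triggers the anomaly branch,
# so stiffness is pulled toward the compliant value.
k = adapt_stiffness(800.0, np.array([0.0, 0.0, 10.0]),
                    np.array([0.0, 0.0, 30.0]))
```

The first-order blend keeps the stiffness trajectory smooth, which matters in practice because step changes in impedance gains can excite contact instabilities.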
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.