Constrained Dynamic Movement Primitives for Safe Learning of Motor
Skills
- URL: http://arxiv.org/abs/2209.14461v1
- Date: Wed, 28 Sep 2022 22:59:33 GMT
- Title: Constrained Dynamic Movement Primitives for Safe Learning of Motor
Skills
- Authors: Seiji Shaw, Devesh K. Jha, Arvind Raghunathan, Radu Corcodel, Diego
Romeres, George Konidaris and Daniel Nikovski
- Abstract summary: We present constrained dynamic movement primitives (CDMP) which can allow for constraint satisfaction in the robot workspace.
A video showing the implementation of the proposed algorithm using different manipulators in different environments can be found at https://youtu.be/hJegJJkJfys.
- Score: 25.06692536893836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic movement primitives are widely used for learning skills which can be
demonstrated to a robot by a skilled human or controller. While their
generalization capabilities and simple formulation make them very appealing to
use, they possess no strong guarantees to satisfy operational safety
constraints for a task. In this paper, we present constrained dynamic movement
primitives (CDMP) which can allow for constraint satisfaction in the robot
workspace. We present a formulation of a non-linear optimization to perturb the
DMP forcing weights regressed by locally-weighted regression to admit a Zeroing
Barrier Function (ZBF), which certifies workspace constraint satisfaction. We
demonstrate the proposed CDMP under different constraints on the end-effector
movement such as obstacle avoidance and workspace constraints on a physical
robot. A video showing the implementation of the proposed algorithm using
different manipulators in different environments can be found here:
https://youtu.be/hJegJJkJfys.
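For readers unfamiliar with the pieces named above, the sketch below rolls out a standard one-dimensional DMP with a locally-weighted-regression forcing term and numerically checks a zeroing-barrier-function condition along the resulting trajectory. This is a minimal illustration, not the authors' CDMP: the gains, the basis-function placement, and the 1-D obstacle barrier h(y) = (y - c)^2 - r^2 are assumptions made for illustration, and the paper's nonlinear optimization that perturbs the forcing weights until the ZBF condition holds is not implemented here.

```python
# Minimal sketch (not the authors' code): a discrete 1-D DMP with an LWR-style
# forcing term, plus a zeroing-barrier-function (ZBF) check along the rollout.
# All gains, the basis placement, and the 1-D "obstacle" barrier are
# illustrative assumptions.
import numpy as np

def dmp_rollout(w, y0, g, tau=1.0, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    """Integrate a 1-D DMP: tau*z_dot = alpha_z*(beta_z*(g - y) - z) + f(x), tau*y_dot = z."""
    n_bf = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_bf))   # basis centers in phase x
    h = 1.0 / np.gradient(c) ** 2                    # basis widths (heuristic)
    y, z, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)   # forcing term from weights
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                        # canonical system
        traj.append(y)
    return np.asarray(traj)

def zbf_violations(traj, dt=0.01, center=0.5, radius=0.1, gamma=10.0):
    """Check the ZBF condition h_dot + gamma*h >= 0 for h(y) = (y - center)^2 - r^2,
    i.e. keep the trajectory outside a 1-D 'obstacle' interval."""
    hvals = (traj - center) ** 2 - radius ** 2
    hdot = np.gradient(hvals, dt)
    return np.where(hdot + gamma * hvals < 0)[0]              # timesteps violating the ZBF

# Usage: weights regressed from a demonstration would replace the random ones here.
w_demo = np.random.randn(20)
path = dmp_rollout(w_demo, y0=0.0, g=1.0)
print("ZBF condition violated at", len(zbf_violations(path)), "timesteps")
```

In the CDMP formulation, a nonlinear program would perturb the demonstrated weights so that the ZBF condition holds at every timestep while staying close to the demonstrated forcing term; the sketch above only detects where a raw demonstration violates that condition.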
Related papers
- Programmable Motion Generation for Open-Set Motion Control Tasks [51.73738359209987]
We introduce a new paradigm, programmable motion generation.
In this paradigm, any given motion control task is broken down into a combination of atomic constraints.
These constraints are then programmed into an error function that quantifies the degree to which a motion sequence adheres to them.
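As a reading aid, here is a minimal sketch of that "atomic constraints composed into an error function" idea; the weighted-sum composition and the particular constraints are assumptions for illustration, not the paper's actual formulation.

```python
# Hedged sketch: compose atomic constraint errors into a single task error
# over a motion sequence. Constraint choices and weights are illustrative.
import numpy as np

def keyframe_constraint(motion, t, target):
    """Atomic constraint: the pose at frame t should match a target."""
    return np.sum((motion[t] - target) ** 2)

def smoothness_constraint(motion):
    """Atomic constraint: penalize large frame-to-frame accelerations."""
    return np.sum(np.diff(motion, n=2, axis=0) ** 2)

def task_error(motion, constraints):
    """Program a task as a weighted sum of atomic constraint errors."""
    return sum(w * fn(motion) for w, fn in constraints)

# Usage: score a random 60-frame, 3-DoF motion against a composed task.
motion = np.random.randn(60, 3)
task = [(1.0, lambda m: keyframe_constraint(m, 0, np.zeros(3))),
        (1.0, lambda m: keyframe_constraint(m, -1, np.ones(3))),
        (0.1, smoothness_constraint)]
print("task error:", task_error(motion, task))
```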
arXiv Detail & Related papers (2024-05-29T17:14:55Z)
- Safe Machine-Learning-supported Model Predictive Force and Motion Control in Robotics [0.0]
Many robotic tasks, such as human-robot interactions or the handling of fragile objects, require tight control and limitation of the forces and moments that arise, alongside motion control, to achieve safe yet high-performance operation.
We propose a learning-supported model predictive force and motion control scheme that provides safety guarantees while adapting to changing situations.
arXiv Detail & Related papers (2023-03-08T13:30:02Z)
- Safe Imitation Learning of Nonlinear Model Predictive Control for Flexible Robots [6.501150406218775]
We propose a framework for a safe approximation of nonlinear model predictive control (NMPC) using imitation learning and a predictive safety filter.
Compared to NMPC, our framework shows more than an eightfold improvement in computation time when controlling a three-dimensional flexible robot arm in simulation.
The development of fast and safe approximate NMPC holds the potential to accelerate the adoption of flexible robots in industry.
arXiv Detail & Related papers (2022-12-06T12:54:08Z)
- Differentiable Constrained Imitation Learning for Robot Motion Planning and Control [0.26999000177990923]
We develop a framework for constrained robotic motion planning and control, as well as traffic agent simulation.
We focus on mobile robot and automated driving applications.
Simulated experiments of mobile robot navigation and automated driving provide evidence for the performance of the proposed method.
arXiv Detail & Related papers (2022-10-21T08:19:45Z)
- Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
arXiv Detail & Related papers (2022-03-09T14:21:32Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Large Scale Distributed Collaborative Unlabeled Motion Planning with Graph Policy Gradients [122.85280150421175]
We present a learning method to solve the unlabeled motion planning problem with motion and space constraints in 2D space for a large number of robots.
We employ a graph neural network (GNN) to parameterize policies for the robots.
arXiv Detail & Related papers (2021-02-11T21:57:43Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.