Safe Machine-Learning-supported Model Predictive Force and Motion
Control in Robotics
- URL: http://arxiv.org/abs/2303.04569v1
- Date: Wed, 8 Mar 2023 13:30:02 GMT
- Title: Safe Machine-Learning-supported Model Predictive Force and Motion
Control in Robotics
- Authors: Janine Matschek, Johanna Bethge, and Rolf Findeisen
- Abstract summary: Many robotic tasks, such as human-robot interactions or the handling of fragile objects, require tight control and limitation of the forces and moments that arise, alongside motion control, to achieve safe yet high-performance operation.
We propose a learning-supported model predictive force and motion control scheme that provides safety guarantees while adapting to changing situations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many robotic tasks, such as human-robot interactions or the handling of
fragile objects, require tight control and limitation of the forces and
moments that arise, alongside sensible motion control, to achieve safe yet high-performance
operation. We propose a learning-supported model predictive force and motion
control scheme that provides stochastic safety guarantees while adapting to
changing situations. Gaussian processes are used to learn the uncertain
relations that map the robot's states to the forces and moments. The model
predictive controller uses these Gaussian process models to achieve precise
motion and force control under stochastic constraint satisfaction. As the
uncertainty only occurs in the static model parts -- the output equations -- a
computationally efficient stochastic MPC formulation is used. Analysis of
recursive feasibility of the optimal control problem and convergence of the
closed-loop system for the static uncertainty case is given. The chance-constraint
formulation and back-offs are constructed based on the variance of the Gaussian
process to guarantee safe operation. The approach is illustrated on a
lightweight robot in simulations and experiments.
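The back-off mechanism described above (tightening a constraint by a multiple of the Gaussian process's predictive standard deviation) can be sketched in a few lines. The following Python snippet is a hypothetical illustration only, not the authors' implementation: it uses synthetic data, a scikit-learn GP, and a made-up force limit to fit a map from a scalar robot state to a measured contact force and to evaluate the tightened bound mu(x) + z_{1-delta} * sigma(x) <= f_max that a stochastic MPC would enforce in place of the nominal chance constraint P(force <= f_max) >= 1 - delta.

    # Hypothetical sketch: GP-based force model and variance-based
    # constraint back-off for a stochastic MPC (illustration only,
    # not the paper's implementation; all data and limits are made up).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic training data: scalar robot state (e.g. tool displacement)
    # mapped to a noisy measured contact force (assumed linear relation).
    x_train = rng.uniform(0.0, 0.02, size=(30, 1))                 # displacement [m]
    f_train = 800.0 * x_train.ravel() + rng.normal(0.0, 0.3, 30)   # force [N]

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.01) + WhiteKernel(noise_level=0.1),
        normalize_y=True,
    )
    gp.fit(x_train, f_train)

    # Chance constraint P(force <= f_max) >= 1 - delta.
    # For a Gaussian predictive distribution this is equivalent to the
    # tightened (backed-off) constraint mu(x) + z_{1-delta} * sigma(x) <= f_max.
    f_max, delta = 10.0, 0.05
    z = norm.ppf(1.0 - delta)

    x_candidates = np.linspace(0.0, 0.02, 50).reshape(-1, 1)  # hypothetical predicted states
    mu, sigma = gp.predict(x_candidates, return_std=True)
    back_off = z * sigma
    feasible = mu + back_off <= f_max  # states admissible with probability >= 1 - delta

    print(f"largest admissible displacement: {x_candidates[feasible].max():.4f} m")

Because the learned uncertainty enters only the output (force) equation, the tightening can be computed from the GP's predictive variance alone, which is what keeps the stochastic MPC formulation computationally efficient.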
Related papers
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- Correct-by-Construction Control for Stochastic and Uncertain Dynamical Models via Formal Abstractions [44.99833362998488]
We develop an abstraction framework that can be used to solve this controller synthesis problem under various modeling assumptions.
We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification.
We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model.
arXiv Detail & Related papers (2023-11-16T11:03:54Z)
- Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z)
- Active Uncertainty Reduction for Safe and Efficient Interaction Planning: A Shielding-Aware Dual Control Approach [9.07774184840379]
We present a novel algorithmic approach to enable active uncertainty reduction for interactive motion planning based on the implicit dual control paradigm.
Our approach relies on sampling-based approximation of dynamic programming, leading to a model predictive control problem that can be readily solved by real-time gradient-based optimization methods.
arXiv Detail & Related papers (2023-02-01T01:34:48Z)
- Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems [1.0323063834827415]
We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound.
We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot.
arXiv Detail & Related papers (2022-12-13T19:38:39Z)
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z)
- Adaptive Model Predictive Control by Learning Classifiers [26.052368583196426]
We propose an adaptive MPC variant that automatically estimates control and model parameters.
We leverage recent results showing that Bayesian optimization (BO) can be formulated as density ratio estimation.
This is then integrated into a model predictive path integral control framework yielding robust controllers for a variety of challenging robotics tasks.
arXiv Detail & Related papers (2022-03-13T23:22:12Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
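For reference, the entropic risk measure named in the crowd-robot interaction entry above is commonly defined, for a stochastic cost Z and risk-sensitivity parameter \theta > 0, as

    \rho_\theta(Z) = \frac{1}{\theta} \log \mathbb{E}\!\left[\exp(\theta Z)\right],
    \qquad \lim_{\theta \to 0^+} \rho_\theta(Z) = \mathbb{E}[Z],

so that larger \theta places more weight on high-cost outcomes, while the risk-neutral expected cost is recovered as \theta tends to zero.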
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.