Trajectory Optimization of Chance-Constrained Nonlinear Stochastic
Systems for Motion Planning and Control
- URL: http://arxiv.org/abs/2106.02801v1
- Date: Sat, 5 Jun 2021 05:15:05 GMT
- Title: Trajectory Optimization of Chance-Constrained Nonlinear Stochastic
Systems for Motion Planning and Control
- Authors: Yashwanth Kumar Nakka and Soon-Jo Chung
- Abstract summary: We compute a sub-optimal solution for a continuous-time chance-constrained nonlinear optimal control problem (SNOC) problem.
The proposed method enables motion planning and control of robotic systems under uncertainty.
- Score: 9.35511513240868
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present gPC-SCP: Generalized Polynomial Chaos-based Sequential Convex
Programming method to compute a sub-optimal solution for a continuous-time
chance-constrained stochastic nonlinear optimal control (SNOC) problem.
The approach enables motion planning and control of robotic systems under
uncertainty. The proposed method involves two steps. The first step is to
derive a deterministic nonlinear optimal control problem (DNOC) with convex
constraints that are surrogate to the SNOC by using gPC expansion and the
distributionally-robust convex subset of the chance constraints. The second
step is to solve the DNOC problem using sequential convex programming (SCP) for
trajectory generation and control. We prove that in the unconstrained case, the
optimal value of the DNOC converges to that of SNOC asymptotically and that any
feasible solution of the constrained DNOC is a feasible solution of the
chance-constrained SNOC. We derive a stable stochastic model predictive
controller using the gPC-SCP for tracking a trajectory in the presence of
uncertainty. We empirically demonstrate the efficacy of the gPC-SCP method for
the following three test cases: 1) collision checking under uncertainty in
actuation, 2) collision checking with stochastic obstacle model, and 3) safe
trajectory tracking under uncertainty in the dynamics and obstacle location by
using a receding horizon control approach. We validate the effectiveness of the
gPC-SCP method on the robotic spacecraft testbed.
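The two-step structure described in the abstract lends itself to a compact numerical sketch. The snippet below is an illustrative toy, not the authors' gPC-SCP implementation: a small Gauss-Hermite stochastic-collocation surrogate stands in for the gPC expansion of a one-dimensional system with an uncertain drag coefficient, the chance constraint Pr(x <= x_max) >= 1 - eps is replaced by the distributionally-robust back-off mu_x + sqrt((1-eps)/eps) * sigma_x <= x_max, and an off-the-shelf solver on a penalized deterministic surrogate stands in for the convex SCP subproblems. All model and tuning values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, N, eps = 0.1, 30, 0.05
kappa = np.sqrt((1.0 - eps) / eps)        # distributionally-robust back-off factor:
                                          # Pr(g <= 0) >= 1 - eps if mu_g + kappa*sigma_g <= 0

def moments(u, theta_mean=1.0, theta_std=0.1, n_nodes=5):
    """Mean/std of the position trajectory under an uncertain drag coefficient,
    via Gauss-Hermite collocation (a simple stand-in for a gPC surrogate)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / weights.sum()
    trajs = []
    for z in nodes:
        theta, x, v, traj = theta_mean + theta_std * z, 0.0, 0.0, []
        for uk in u:
            v += dt * (uk - theta * v)    # uncertain damping coefficient theta
            x += dt * v
            traj.append(x)
        trajs.append(traj)
    trajs = np.asarray(trajs)
    mean = weights @ trajs
    std = np.sqrt(weights @ (trajs - mean) ** 2)
    return mean, std

def surrogate_cost(u, goal=0.9, x_max=1.0):
    """Deterministic surrogate of the SNOC: effort plus terminal error, with the
    chance constraint replaced by a penalized back-off  mu + kappa*std <= x_max."""
    mean, std = moments(u)
    backoff = mean + kappa * std - x_max
    return ((mean[-1] - goal) ** 2 + 1e-3 * np.sum(u ** 2)
            + 10.0 * np.sum(np.maximum(backoff, 0.0) ** 2))

# Solving the penalized deterministic surrogate stands in for the SCP iterations.
res = minimize(surrogate_cost, np.zeros(N), method="L-BFGS-B")
mean, std = moments(res.x)
print("mean final position:", round(mean[-1], 3),
      "max back-off value:", round(float((mean + kappa * std).max()), 3))
```

In the paper the surrogate constraints are kept convex and the DNOC is solved by a sequence of convex programs rather than by a generic penalized solver; the penalty form here is only for brevity.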
Related papers
- A Simulation-Free Deep Learning Approach to Stochastic Optimal Control [12.699529713351287]
We propose a simulation-free algorithm for the solution of generic problems in stochastic optimal control (SOC).
Unlike existing methods, our approach does not require the solution of an adjoint problem.
arXiv Detail & Related papers (2024-10-07T16:16:53Z) - Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs [82.34567890576423]
We develop a deterministic policy gradient primal-dual method to find an optimal deterministic policy with non-asymptotic convergence.
We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair.
To the best of our knowledge, this is the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
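The primal-dual mechanism referenced here can be illustrated generically. The toy below is an assumed illustration, not the D-PGPD algorithm: it alternates a gradient step on the Lagrangian in a scalar policy parameter with projected dual ascent on the constraint multiplier.

```python
# Generic primal-dual iteration on a toy constrained problem (illustrative only):
# minimize J(theta) subject to C(theta) <= d.
import numpy as np

J = lambda th: (th - 2.0) ** 2          # assumed objective (e.g., expected cost)
C = lambda th: th ** 2                  # assumed constraint functional
d = 1.0                                 # constraint budget

def num_grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

theta, lam, eta = 0.0, 0.0, 0.05
for _ in range(500):
    # Lagrangian L(theta, lam) = J(theta) + lam * (C(theta) - d)
    theta -= eta * (num_grad(J, theta) + lam * num_grad(C, theta))  # primal descent
    lam = max(0.0, lam + eta * (C(theta) - d))                      # projected dual ascent

print(f"theta ~ {theta:.3f}, C(theta) ~ {C(theta):.3f}, lambda ~ {lam:.3f}")
```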
arXiv Detail & Related papers (2024-08-19T14:11:04Z) - Dual Formulation for Chance Constrained Stochastic Shortest Path with
Application to Autonomous Vehicle Behavior Planning [3.655021726150368]
The Constrained Stochastic Shortest Path problem (C-SSP) is a formalism for planning in environments under certain types of operating constraints.
This work's first contribution is an exact integer linear formulation for chance-constrained policies.
We further show that the CC-SSP formalism can be generalized to account for constraints that span multiple time steps.
arXiv Detail & Related papers (2023-02-25T16:40:00Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
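The flavor of a stochastic SQP iteration can be sketched on a toy equality-constrained problem. The code below is an assumed simplification, not TR-StoSQP: it uses a sampled gradient, an identity Hessian model, and a fixed step-length cap in place of the paper's adaptive trust-region radius and indefinite-Hessian handling.

```python
# Simplified stochastic SQP iteration on a toy problem (illustrative only):
#   minimize f(x) = x0^2 + 2*x1^2   subject to   c(x) = x0 + x1 - 1 = 0
import numpy as np

rng = np.random.default_rng(0)
x = np.array([2.0, 2.0])
radius = 1.0                                    # fixed cap standing in for the trust region

for _ in range(50):
    g = np.array([2 * x[0], 4 * x[1]]) + 0.01 * rng.standard_normal(2)  # sampled gradient
    c = np.array([x[0] + x[1] - 1.0])
    J = np.array([[1.0, 1.0]])                  # constraint Jacobian
    B = np.eye(2)                               # Hessian approximation
    # KKT system of the quadratic model:  [B J'; J 0] [d; nu] = [-g; -c]
    K = np.block([[B, J.T], [J, np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.concatenate([-g, -c]))[:2]
    if np.linalg.norm(d) > radius:              # crude step cap
        d *= radius / np.linalg.norm(d)
    x = x + d

print("x ~", np.round(x, 3), " constraint residual ~", round(float(x[0] + x[1] - 1.0), 4))
```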
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Learning Stochastic Parametric Differentiable Predictive Control
Policies [2.042924346801313]
We present a scalable alternative called stochastic parametric differentiable predictive control (SP-DPC) for unsupervised learning of neural control policies.
SP-DPC is formulated as a deterministic approximation to the parametric constrained optimal control problem.
We provide theoretical probabilistic guarantees for policies learned via the SP-DPC method on closed-loop chance constraint satisfaction.
arXiv Detail & Related papers (2022-03-02T22:46:32Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
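A common way such GP-based approaches account for model uncertainty is to tighten the barrier condition by a multiple of the GP posterior standard deviation. The snippet below is an assumed, generic illustration of that idea for a scalar input, not the paper's pointwise-feasibility construction; all function and parameter names are hypothetical.

```python
# Generic robustified control barrier function condition with a GP confidence bound
# (assumed toy). For scalar input u, the tightened condition
#   Lfh + Lgh*u + (mu - beta*sigma) >= -alpha*h
# is linear in u, so the minimally-invasive safe input has a closed form.
def safe_input(u_des, h, Lfh, Lgh, mu, sigma, alpha=1.0, beta=2.0):
    """Return the input closest to u_des that satisfies the tightened CBF condition."""
    rhs = -alpha * h - Lfh - (mu - beta * sigma)   # require Lgh * u >= rhs
    if Lgh > 0:
        return max(u_des, rhs / Lgh)
    elif Lgh < 0:
        return min(u_des, rhs / Lgh)
    return u_des                                   # Lgh == 0: condition does not involve u

# Example: the nominal input is unsafe once the GP uncertainty (sigma) is accounted for.
print(safe_input(u_des=-1.0, h=0.2, Lfh=-0.5, Lgh=1.0, mu=0.0, sigma=0.3))
```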
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that this resulting optimization problem is convex, and we call it Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP)
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Stochastic Finite State Control of POMDPs with LTL Specifications [14.163899014007647]
Partially observable Markov decision processes (POMDPs) provide a modeling framework for autonomous decision making under uncertainty.
This paper considers the quantitative problem of synthesizing sub-optimal stochastic finite state controllers (sFSCs) for POMDPs.
We propose a bounded policy algorithm, leading to controlled growth in sFSC size, and an anytime algorithm, where the performance of the controller improves with successive iterations.
arXiv Detail & Related papers (2020-01-21T18:10:47Z)