Convex Chance-Constrained Stochastic Control under Uncertain Specifications with Application to Learning-Based Hybrid Powertrain Control
- URL: http://arxiv.org/abs/2601.18313v1
- Date: Mon, 26 Jan 2026 09:49:33 GMT
- Title: Convex Chance-Constrained Stochastic Control under Uncertain Specifications with Application to Learning-Based Hybrid Powertrain Control
- Authors: Teruki Kato, Ryotaro Shima, Kenji Kashima
- Abstract summary: This paper presents a strictly convex chance-constrained control framework that accounts for uncertainty in control specifications. The proposed method guarantees probabilistic constraint satisfaction while ensuring strict convexity, leading to uniqueness and continuity of the optimal solution. The effectiveness of the proposed approach is demonstrated through model predictive control applied to a hybrid powertrain system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a strictly convex chance-constrained stochastic control framework that accounts for uncertainty in control specifications such as reference trajectories and operational constraints. By jointly optimizing control inputs and risk allocation under general (possibly non-Gaussian) uncertainties, the proposed method guarantees probabilistic constraint satisfaction while ensuring strict convexity, leading to uniqueness and continuity of the optimal solution. The formulation is further extended to nonlinear model-based control using exactly linearizable models identified through machine learning. The effectiveness of the proposed approach is demonstrated through model predictive control applied to a hybrid powertrain system.
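The core mechanism in the abstract (jointly choosing the control input and a risk back-off so that a chance constraint holds while the objective stays strictly convex) can be sketched for the simplest possible case. The scalar model, the Gaussian noise, and all numbers below are illustrative assumptions, not taken from the paper, which treats general non-Gaussian uncertainty and full model predictive control:

```python
# Minimal one-step sketch of chance-constrained control (illustrative only).
# Hypothetical scalar model: x_next = a*x + b*u + w, with w ~ N(0, sigma^2);
# the paper itself covers general non-Gaussian uncertainty.
from statistics import NormalDist

a, b, sigma = 0.9, 0.5, 0.1      # assumed dynamics and noise level
x, x_max, eps = 1.0, 1.2, 0.05   # current state, state bound, risk level
ref, r = 1.5, 0.01               # tracking reference for x_next, input weight

# P(x_next <= x_max) >= 1 - eps tightens to a deterministic bound on u
# via the Gaussian quantile (risk allocation reduces to one margin here).
kappa = NormalDist().inv_cdf(1 - eps)
u_bound = (x_max - a * x - kappa * sigma) / b   # valid since b > 0

# Strictly convex objective (a*x + b*u - ref)^2 + r*u^2 has a unique
# minimizer; project it onto the tightened constraint set.
u_star = b * (ref - a * x) / (b * b + r)
u = min(u_star, u_bound)

x_next_mean = a * x + b * u
assert x_next_mean + kappa * sigma <= x_max + 1e-9  # chance constraint met
```

Strict convexity of the objective is what makes the minimizer unique and continuous in the problem data, which is the property the paper exploits; in this toy instance the constrained optimum simply sits on the tightened bound.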
Related papers
- Rectified Robust Policy Optimization for Model-Uncertain Constrained Reinforcement Learning without Strong Duality [53.525547349715595]
We propose a novel primal-only algorithm called Rectified Robust Policy Optimization (RRPO). RRPO operates directly on the primal problem without relying on dual formulations. We show convergence to an approximately optimal feasible policy with complexity matching the best-known lower bound.
arXiv Detail & Related papers (2025-08-24T16:59:38Z)
- Distributionally Robust Policy and Lyapunov-Certificate Learning [13.38077406934971]
A key challenge in designing controllers with stability guarantees for uncertain systems is accurately determining, and adapting to, shifts in model parametric uncertainty during online deployment.
We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint ensuring a monotonic decrease of the Lyapunov certificate.
We show that, for the resulting closed-loop system, the global stability of its equilibrium can be certified with high confidence, even under out-of-distribution uncertainties.
arXiv Detail & Related papers (2024-04-03T18:57:54Z)
- Learning the Uncertainty Sets for Control Dynamics via Set Membership: A Non-Asymptotic Analysis [18.110158316883403]
This paper focuses on set membership estimation (SME) for unknown linear systems.
We provide the first convergence rate bounds for SME and discuss variations of SME under relaxed assumptions.
We also provide numerical results demonstrating SME's practical promise.
arXiv Detail & Related papers (2023-09-26T03:58:06Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Safe Chance Constrained Reinforcement Learning for Batch Process Control [0.0]
Reinforcement Learning (RL) controllers have generated excitement within the control community.
Recent focus on engineering applications has been directed towards the development of safe RL controllers.
arXiv Detail & Related papers (2021-04-23T16:48:46Z)
- Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty [28.10549712956161]
We propose a learning-based robust predictive control algorithm that can handle large uncertainty in the dynamics for a class of discrete-time systems.
Motivated by the inability of existing learning-based predictive control algorithms to provide safety guarantees in the presence of large-magnitude uncertainties, we achieve significant performance improvements.
arXiv Detail & Related papers (2021-04-16T17:47:02Z)
- Combining Gaussian processes and polynomial chaos expansions for stochastic nonlinear model predictive control [0.0]
We introduce a new algorithm to explicitly consider time-invariant uncertainties in optimal control problems.
The main novelty of this paper is the efficient use of this combination of Gaussian processes and polynomial chaos expansions to obtain mean and variance estimates of nonlinear transformations.
It is shown how to formulate both chance constraints and a probabilistic objective for the optimal control problem.
arXiv Detail & Related papers (2021-03-09T14:25:08Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
Reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
- Technical Report: Adaptive Control for Linearizable Systems Using On-Policy Reinforcement Learning [41.24484153212002]
This paper proposes a framework for adaptively learning a feedback linearization-based tracking controller for an unknown system.
It does not require the learned inverse model to be invertible at all instances of time.
A simulated example of a double pendulum demonstrates the utility of the proposed theory.
arXiv Detail & Related papers (2020-04-06T15:50:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.