Stochastic Optimal Control via Measure Relaxations
- URL: http://arxiv.org/abs/2508.00886v1
- Date: Sat, 26 Jul 2025 07:18:16 GMT
- Title: Stochastic Optimal Control via Measure Relaxations
- Authors: Etienne Buehrle, Christoph Stiller
- Abstract summary: We cast the optimal control problem of a stochastic system as a convex problem over occupation measures. We demonstrate our method on a set of synthetic and real-world scenarios.
- Score: 5.716199402595783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The optimal control problem of stochastic systems is commonly solved via robust or scenario-based optimization methods, which are both challenging to scale to long optimization horizons. We cast the optimal control problem of a stochastic system as a convex optimization problem over occupation measures. We demonstrate our method on a set of synthetic and real-world scenarios, learning cost functions from data via Christoffel polynomials. The code for our experiments is available at https://github.com/ebuehrle/dpoc.
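To make the occupation-measure relaxation concrete, the sketch below solves the classic linear-programming formulation of a small discounted MDP with scipy: expected cost is minimized over occupation measures subject to flow-conservation constraints. This is a generic textbook illustration of the idea, not the authors' continuous-space formulation or their code; the toy problem data are invented.

```python
# Minimal sketch: occupation-measure LP for a small discounted MDP.
# Generic illustration of the relaxation, not the paper's exact method.
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition kernel
c = rng.random((S, A))                        # stage cost c(s, a)
rho0 = np.full(S, 1.0 / S)                    # initial state distribution

# Flow constraints: sum_a mu(s',a) = rho0(s') + gamma * sum_{s,a} P(s'|s,a) mu(s,a)
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(sp == s) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_eq=A_eq, b_eq=rho0, bounds=(0, None))
mu = res.x.reshape(S, A)                      # optimal occupation measure
policy = mu / mu.sum(axis=1, keepdims=True)   # randomized policy mu(a|s)
print(policy)
```

The optimal measure encodes a randomized policy by conditioning on the state, which is why the convex relaxation is exact in this finite-state setting.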
Related papers
- Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
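As a rough illustration of the reward-to-preference transformation named above, a Bradley-Terry-style comparison model scores pairs of sampled solutions. The sketch below is a generic stand-in under that assumption, not the paper's method; all names and values are invented.

```python
# Minimal sketch: turning quantitative rewards into qualitative preference
# signals with a Bradley-Terry-style comparison model (generic illustration).
import numpy as np

def preference_loss(r, logits):
    """Negative log-likelihood that the model's logits rank sampled
    solutions consistently with pairwise reward comparisons."""
    loss = 0.0
    for i in range(len(r)):
        for j in range(len(r)):
            if r[i] > r[j]:  # solution i preferred over solution j
                # Bradley-Terry: P(i beats j) = sigmoid(logit_i - logit_j)
                loss -= np.log(1.0 / (1.0 + np.exp(logits[j] - logits[i])))
    return loss

rewards = np.array([10.0, 12.5, 9.0])   # quantitative rewards of 3 samples
logits = np.array([0.2, 0.8, -0.1])     # model scores for the same samples
print(preference_loss(rewards, logits))
```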
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - An Efficient On-Policy Deep Learning Framework for Stochastic Optimal Control [14.832859803172846]
We present a novel on-policy algorithm for solving stochastic optimal control (SOC) problems. By leveraging the Girsanov theorem, our method directly computes on-policy gradients of the SOC objective without expensive backpropagation through differential equations or adjoint problem solutions. Experimental results demonstrate substantial improvements in both computational speed and memory efficiency compared to existing approaches.
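For orientation, the Girsanov device such methods build on can be written down for the simplest setting (unit diffusion, control entering the drift); the identity below is standard and the notation generic, not taken from the paper.

```latex
\[
  dX_t = u_\theta(t, X_t)\,dt + dW_t, \qquad
  J(\theta) = \mathbb{E}_{P^\theta}\!\Big[\int_0^T \tfrac12\|u_\theta(t,X_t)\|^2\,dt + g(X_T)\Big].
\]
% By Girsanov's theorem, the controlled path measure P^theta has density
\[
  \frac{dP^\theta}{dP^\nu}
  = \exp\!\Big(\int_0^T (u_\theta - u_\nu)^\top dW_t^\nu
               - \tfrac12 \int_0^T \|u_\theta - u_\nu\|^2\,dt\Big)
\]
% with respect to a reference measure P^nu. Rewriting J(theta) as an
% expectation under P^nu and differentiating the log-density in theta at
% nu = theta yields an on-policy likelihood-ratio gradient that never
% backpropagates through the SDE trajectory itself.
```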
arXiv Detail & Related papers (2024-10-07T16:16:53Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
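The linear-system view of the backward pass is easy to see via implicit differentiation: at a fixed point x* = f(x*, theta) of the unrolled iteration, the sensitivity dx*/dtheta solves (I - df/dx) J = df/dtheta. A minimal numpy sketch of this generic fact follows; it is not the paper's Folded Optimization system, and the toy problem is invented.

```python
# Minimal sketch: backward pass of an unrolled solver as a linear system.
import numpy as np

# Toy fixed-point iteration x <- f(x, theta) solving (A + theta*I) x = b,
# written as a damped Richardson update so it is a contraction.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
alpha = 0.1

def f(x, theta):
    return x - alpha * ((A + theta * np.eye(2)) @ x - b)

theta = 0.5
x = np.zeros(2)
for _ in range(500):          # forward pass: unroll to (near) convergence
    x = f(x, theta)

# Backward pass: dx/dtheta solves (I - df/dx) J = df/dtheta at the fixed point.
df_dx = np.eye(2) - alpha * (A + theta * np.eye(2))   # Jacobian of f w.r.t. x
df_dtheta = -alpha * x                                 # Jacobian of f w.r.t. theta
dx_dtheta = np.linalg.solve(np.eye(2) - df_dx, df_dtheta)

# Check against the analytic sensitivity of (A + theta*I) x = b:
# dx/dtheta = -(A + theta*I)^{-1} x.
print(dx_dtheta, -np.linalg.solve(A + theta * np.eye(2), x))
```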
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Stochastic Optimal Control Matching [53.156277491861985]
Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control.
The control is learned via a least squares problem by trying to fit a matching vector field.
Experimentally, our algorithm achieves lower error than all the existing IDO techniques for optimal control.
arXiv Detail & Related papers (2023-12-04T16:49:43Z) - Numerical Methods for Convex Multistage Stochastic Optimization [86.45244607927732]
We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC), and Markov Decision Processes (MDP).
Recent progress in solving convex multistage stochastic problems is based on cutting-plane approximations of the cost-to-go functions of dynamic programming equations.
Cutting plane type methods can handle multistage problems with a large number of stages, but a relatively smaller number of state (decision) variables.
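For a concrete picture of the cutting-plane idea, the cost-to-go function is approximated from below by the maximum of affine cuts collected at trial points. The toy 1-D sketch below is a generic SDDP-style illustration, not taken from the paper.

```python
# Minimal sketch: lower-bounding a convex cost-to-go V(x) with affine cuts,
# the building block of cutting-plane (SDDP-style) methods. Invented toy data.
import numpy as np

def V(x):                      # "true" convex cost-to-go (unknown in practice)
    return (x - 1.0) ** 2

def V_grad(x):                 # subgradient oracle at a trial point
    return 2.0 * (x - 1.0)

cuts = []                      # each cut encodes V(x) >= a * x + b

def V_lower(x):
    if not cuts:
        return -np.inf
    return max(a * x + b for a, b in cuts)

# Iteratively sample trial points and add a supporting cut at each.
for xk in [-2.0, 0.0, 2.0, 0.8]:
    a = V_grad(xk)
    b = V(xk) - a * xk
    cuts.append((a, b))

for x in [-1.0, 0.5, 1.0, 1.5]:
    print(x, V_lower(x), "<=", V(x))   # the max of cuts never exceeds V
```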
arXiv Detail & Related papers (2023-03-28T01:30:40Z) - Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage stochastic optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z) - A Nonstochastic Control Approach to Optimization [26.744354103012448]
We show how recent methods from online nonstochastic control can overcome the challenge of nonconvexity.
This allows learning a method that attains the performance of the best method in hindsight from a class of methods.
arXiv Detail & Related papers (2023-01-19T06:08:01Z) - Chance-Constrained Optimization in Contact-Rich Systems for Robust Manipulation [13.687891070512828]
We present a chance-constrained optimization formulation for Stochastic Discrete-time Linear Complementarity Systems (SDLCS).
In our formulation, we explicitly consider joint chance constraints for complementarity as well as states to capture the evolution of dynamics.
The proposed approach outperforms some recent approaches for robust trajectory optimization for SDLCS.
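For intuition about joint chance constraints, a standard tractable route splits the risk budget across individual constraints via Boole's inequality and backs off each linear constraint by a Gaussian quantile. The sketch below shows that textbook reformulation, not the paper's SDLCS-specific machinery; all data are invented.

```python
# Minimal sketch: joint chance constraint P(a_i^T x <= b_i for all i) >= 1 - delta
# via risk allocation (Boole's inequality) and Gaussian back-off terms.
import numpy as np
from scipy.stats import norm

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # constraint normals a_i
b = np.array([2.0, 2.0, 3.0])                        # nominal bounds b_i
Sigma = 0.05 * np.eye(2)                             # Gaussian uncertainty covariance
delta = 0.05                                         # total allowed violation probability
eps = delta / len(b)                                 # uniform risk allocation

# Each row becomes the deterministic constraint
#   a_i^T x <= b_i - Phi^{-1}(1 - eps) * sqrt(a_i^T Sigma a_i)
backoff = norm.ppf(1.0 - eps) * np.sqrt(np.einsum("ij,jk,ik->i", A, Sigma, A))
b_tight = b - backoff

x = np.array([1.0, 1.0])                             # candidate decision
print(np.all(A @ x <= b_tight))                      # chance-feasibility check
```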
arXiv Detail & Related papers (2022-03-05T00:16:22Z) - Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics [0.0]
We present a computationally efficient algorithm that attains an optimal $\sqrt{T}$ regret rate against the best stabilizing linear controller.
In contrast to previous work, our algorithm is based on the Optimism in the Face of Uncertainty paradigm.
arXiv Detail & Related papers (2022-03-02T15:19:20Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piecewise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
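To illustrate the decision-focused loss in play here: the regret of optimizing with predicted coefficients, evaluated under the true coefficients, is the quantity such methods target. A tiny generic sketch on an invented knapsack-style instance (not the paper's algorithm):

```python
# Minimal sketch: decision regret in predict+optimize.
# regret(c_hat) = c^T x*(c) - c^T x*(c_hat): the loss of acting on predicted
# coefficients c_hat, evaluated under the true coefficients c. Invented data.
import itertools
import numpy as np

w = np.array([2.0, 3.0, 4.0])        # item weights
cap = 5.0                             # knapsack capacity
c_true = np.array([3.0, 4.0, 5.0])   # true item values
c_hat = np.array([3.0, 2.0, 6.0])    # predicted item values

def solve(c):
    """Brute-force argmax_x c^T x  s.t.  w^T x <= cap, x binary."""
    best, best_x = -np.inf, None
    for x in itertools.product([0, 1], repeat=len(c)):
        x = np.array(x)
        if w @ x <= cap and c @ x > best:
            best, best_x = c @ x, x
    return best_x

x_star = solve(c_true)                # decision under perfect information
x_hat = solve(c_hat)                  # decision under the prediction
regret = c_true @ x_star - c_true @ x_hat
print(x_star, x_hat, regret)          # nonzero regret: prediction misleads
```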
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Tiering as a Stochastic Submodular Optimization Problem [5.659969270836789]
Tiering is an essential technique for building large-scale information retrieval systems.
We show that the optimal tiering problem can be cast as a submodular minimization problem with a submodular knapsack constraint.
arXiv Detail & Related papers (2020-05-16T07:39:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.