Numerical Methods for Convex Multistage Stochastic Optimization
- URL: http://arxiv.org/abs/2303.15672v1
- Date: Tue, 28 Mar 2023 01:30:40 GMT
- Title: Numerical Methods for Convex Multistage Stochastic Optimization
- Authors: Guanghui Lan and Alexander Shapiro
- Abstract summary: We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP).
Recent progress in solving convex multistage Markov problems is based on cutting planes approximations of the cost-to-go functions of dynamic programming equations.
Cutting plane type methods can handle multistage problems with a large number of stages, but a relatively small number of state (decision) variables.
- Score: 86.45244607927732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization problems involving sequential decisions in a stochastic
environment have been studied in Stochastic Programming (SP), Stochastic
Optimal Control (SOC) and Markov Decision Processes (MDP). In this paper we
concentrate mainly on the SP and SOC modelling approaches. In these frameworks
there are natural situations in which the considered problems are convex. The
classical approach to sequential optimization is based on dynamic programming,
which suffers from the so-called "Curse of Dimensionality": its computational
complexity increases exponentially with the dimension of the state variables.
Recent progress in solving convex multistage stochastic problems is based on
cutting plane approximations of the cost-to-go (value) functions of the dynamic
programming equations. Cutting plane type algorithms in dynamical settings are
one of the main topics of this paper. We also discuss Stochastic Approximation
type methods applied to multistage stochastic optimization problems. From the
computational complexity point of view, these two types of methods appear to be
complementary to each other. Cutting plane type methods can handle multistage
problems with a large number of stages, but a relatively small number of state
(decision) variables. Stochastic approximation type methods, on the other hand,
can deal with only a small number of stages, but a large number of decision
variables.
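To make the cutting-plane idea in the abstract concrete, below is a minimal sketch (not the authors' method) of a Kelley-style cutting-plane scheme for a one-dimensional convex function: each iteration adds a supporting affine cut, and the piecewise-linear lower model is minimized to pick the next point. The specific function, bounds, and grid-based model minimization are illustrative assumptions.

```python
import numpy as np

def kelley_min(f, grad, x0, lb, ub, iters=30):
    """Minimize a convex 1-D function with Kelley's cutting-plane method.

    The model m(x) = max_i [f(x_i) + f'(x_i)(x - x_i)] lower-bounds f,
    exactly as a cutting-plane model lower-bounds a convex cost-to-go
    function. A grid search stands in for the LP that minimizes the model.
    """
    cuts = []  # each cut stored as (slope, intercept)
    x = x0
    for _ in range(iters):
        fx, g = f(x), grad(x)
        cuts.append((g, fx - g * x))  # supporting line at the current point
        xs = np.linspace(lb, ub, 1001)
        model = np.max([a * xs + b for a, b in cuts], axis=0)
        x = xs[np.argmin(model)]      # next iterate: minimizer of the model
    return x
```

For example, minimizing (x - 2)^2 over [-5, 5] from x0 = 0 drives the iterates toward x = 2 as the cut collection tightens around the minimizer.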
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - The Parametric Cost Function Approximation: A new approach for
multistage stochastic programming [4.847980206213335]
We show that a parameterized version of a deterministic optimization model can be an effective way of handling uncertainty without the complexity of stochastic programming or dynamic programming.
This approach can handle complex, high-dimensional state variables, and avoids the usual approximations associated with scenario trees or value function approximations.
arXiv Detail & Related papers (2022-01-01T23:25:09Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Nonequilibrium Monte Carlo for unfreezing variables in hard
combinatorial optimization [1.1783108699768]
We introduce a quantum-inspired family of nonlocal Nonequilibrium Monte Carlo (NMC) algorithms by developing an adaptive gradient-free strategy.
We observe significant speedup and robustness over both specialized solvers and generic solvers.
arXiv Detail & Related papers (2021-11-26T17:45:32Z) - Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
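For context, a generic Mirror Descent step with the entropy mirror map on the probability simplex (the classical setting for such analyses, not the parallel scheme of this paper) reduces to an exponentiated-gradient update. The step size, iteration count, and quadratic test objective below are illustrative assumptions.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, lr=0.1, iters=300):
    """Mirror descent with the entropy mirror map on the simplex.

    The Bregman projection under negative entropy turns each step into
    a multiplicative update followed by renormalization.
    """
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-lr * grad(x))  # exponentiated-gradient step
        x /= x.sum()                   # project back onto the simplex
    return x
```

With the objective ||x - c||^2 for an interior point c of the simplex, the iterates converge to c while remaining feasible at every step.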
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
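A plain (deterministic) conditional gradient step illustrates the key feature of this method family: each iteration solves only a linear subproblem over the feasible set. Over the probability simplex that subproblem reduces to picking a single vertex. The simplex feasible set and the standard 2/(t+2) step size are illustrative assumptions, not the structured constrained setting of the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Conditional gradient (Frank-Wolfe) over the probability simplex.

    The linear minimization oracle over the simplex is just argmin of
    the gradient; the iterate stays feasible as a convex combination.
    """
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0       # linear subproblem: best vertex
        gamma = 2.0 / (t + 2.0)     # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```

On the objective ||x - c||^2 with c in the simplex, the iterates approach c at the usual O(1/t) rate without ever requiring a projection.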
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax
Problems [26.24895953952318]
We develop an algorithm to solve a class of nonconvex-concave minimax problems.
It can also work with either single or two mini-batch derivative estimates.
arXiv Detail & Related papers (2020-06-27T03:05:18Z) - Bayesian optimization of variable-size design space problems [0.0]
Two alternative Bayesian Optimization-based approaches are proposed in order to solve this type of optimization problem.
The first approach consists of a budget allocation strategy that focuses the computational budget on the most promising design sub-spaces.
The second approach is instead based on the definition of a kernel function that computes the covariance between samples characterized by partially different sets of variables.
arXiv Detail & Related papers (2020-03-06T16:30:44Z) - Complexity of Stochastic Dual Dynamic Programming [7.177693955272473]
We first establish the number of iterations, i.e., the iteration complexity, required by a basic dynamic cutting plane method for solving simple multistage optimization problems.
We then refine these basic tools and establish the iteration complexity of both deterministic and stochastic dual dynamic programming methods.
arXiv Detail & Related papers (2019-12-16T20:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.