Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling
- URL: http://arxiv.org/abs/2105.01853v1
- Date: Wed, 5 May 2021 03:36:00 GMT
- Title: Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling
- Authors: An Liu, Rui Yang, Tony Q. S. Quek and Min-Jian Zhao
- Abstract summary: Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
- Score: 86.85697555068168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a two-stage stochastic optimization problem, in which a long-term
optimization variable is coupled with a set of short-term optimization
variables in both the objective and constraint functions. Although two-stage
stochastic optimization plays a critical role in various engineering and
scientific applications, efficient algorithms are still lacking, especially when
the long-term and short-term variables are coupled in the constraints. To
overcome the challenge caused by tightly coupled stochastic constraints, we
first establish a two-stage primal-dual decomposition (PDD) method to decompose
the two-stage problem into a long-term problem and a family of short-term
subproblems. Then we propose a PDD-based stochastic successive convex
approximation (PDD-SSCA) algorithmic framework to find KKT solutions for
two-stage stochastic optimization problems. At each iteration, PDD-SSCA first
runs a short-term sub-algorithm to find stationary points of the short-term
subproblems associated with a mini-batch of the state samples. Then it
constructs a convex surrogate for the long-term problem based on the deep
unrolling of the short-term sub-algorithm and the back propagation method.
Finally, the convex surrogate problem is solved to
generate the next iterate. We establish the almost sure convergence of PDD-SSCA
and customize the algorithmic framework to solve two important application
problems. Simulations show that PDD-SSCA can achieve superior performance over
existing solutions.
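To make the iteration structure described in the abstract concrete, the following is a minimal Python/PyTorch sketch of the PDD-SSCA loop on a toy problem. It is an illustration under strong simplifying assumptions, not the authors' implementation: the toy cost function, all identifiers (`unrolled_short_term_solver`, `UNROLL_STEPS`, `TAU`, `GAMMA`), the constant step sizes, and the omission of the coupled stochastic constraints (and hence of the primal-dual/PDD machinery) are assumptions made here for brevity.

```python
import torch

torch.manual_seed(0)

DIM, UNROLL_STEPS, BATCH, ITERS = 4, 10, 8, 100
LR_SHORT, GAMMA, TAU = 0.1, 0.3, 1.0  # short-term step, averaging weight, surrogate curvature

# Long-term variable (the paper's decreasing step-size rules are replaced
# by constants to keep the sketch short).
theta = torch.full((DIM,), 2.0, requires_grad=True)

def cost(x, theta, xi):
    # Toy per-sample cost coupling the short-term variable x with the
    # long-term variable theta and the random state xi.
    return ((x - xi) ** 2).sum(-1) + ((x - theta) ** 2).sum(-1)

def unrolled_short_term_solver(theta, xi):
    # Deep unrolling: a fixed number of gradient steps on the short-term
    # subproblem, kept on the autograd graph so that the dependence of the
    # short-term solution on theta can be backpropagated through.
    x = torch.zeros_like(xi, requires_grad=True)
    for _ in range(UNROLL_STEPS):
        g = torch.autograd.grad(cost(x, theta, xi).sum(), x,
                                create_graph=True)[0]
        x = x - LR_SHORT * g
    return x

for it in range(ITERS):
    xi = torch.randn(BATCH, DIM)        # mini-batch of state samples
    x_star = unrolled_short_term_solver(theta, xi)

    # Back propagation through the unrolled sub-algorithm yields the gradient
    # of the sampled long-term objective with respect to theta.
    long_term_cost = cost(x_star, theta, xi).mean()
    grad_theta = torch.autograd.grad(long_term_cost, theta)[0]

    # Simplest convex surrogate around the current theta,
    #   grad_theta^T (v - theta) + (TAU / 2) * ||v - theta||^2,
    # whose minimizer theta_bar has a closed form.
    theta_bar = theta.detach() - grad_theta / TAU

    with torch.no_grad():               # SSCA-style averaging update
        theta += GAMMA * (theta_bar - theta)

print("final long-term variable:", theta.detach())  # approaches E[xi] = 0
```

The sketch keeps the unrolled short-term gradient steps on the autograd graph, so differentiating the long-term cost with respect to theta backpropagates through the short-term solution, which is the role the abstract assigns to deep unrolling; the quadratic surrogate is merely the simplest convex surrogate with a closed-form minimizer, whereas the paper's SSCA framework admits more general surrogates and decreasing step-size sequences.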
Related papers
- Analysis of the Non-variational Quantum Walk-based Optimisation Algorithm [0.0]
This paper introduces in detail a non-variational quantum algorithm designed to solve a wide range of optimisation problems.
The algorithm returns optimal and near-optimal solutions from repeated preparation and measurement of an amplified state.
arXiv Detail & Related papers (2024-07-29T13:54:28Z)
- Online Non-convex Optimization with Long-term Non-convex Constraints [2.033434950296318]
A novel Follow-the-Perturbed-Leader type algorithm is proposed and analyzed for solving general long-term constrained optimization problems in an online manner.
The proposed algorithm is applied to tackle a long-term (extreme value) constrained river pollutant source identification problem.
arXiv Detail & Related papers (2023-11-04T15:08:36Z)
- Quantum Annealing Solutions for the Closest String Problem with D-Wave Systems [0.0]
The Closest String Problem is an NP-complete problem that appears most commonly in bioinformatics and coding theory.
Two QUBO formulations have been proposed, one being a slight modification of the other.
D-Wave annealers have been used, with guidelines provided on platform-specific concerns that affect optimality.
arXiv Detail & Related papers (2023-10-19T16:03:25Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The constrained Markov decision process (CMDP) problem is investigated, in which an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's gradient descent for the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Zeroth and First Order Stochastic Frank-Wolfe Algorithms for Constrained Optimization [13.170519806372075]
Problems of convex optimization with two sets of constraints arise frequently in the context of semidefinite programming.
Since projection onto the first set of constraints is difficult, it becomes necessary to explore projection-free algorithms.
The efficacy of the proposed algorithms is tested on relevant applications: sparse matrix estimation, clustering via semidefinite relaxation, and the uniform sparsest cut problem. (A minimal projection-free sketch appears at the end of this page.)
arXiv Detail & Related papers (2021-07-14T08:01:30Z)
- STRIDE along Spectrahedral Vertices for Solving Large-Scale Rank-One Semidefinite Relaxations [27.353023427198806]
We consider solving high-order semidefinite programming (SDP) relaxations of nonconvex polynomial optimization problems (POPs).
Existing approaches, which solve the SDP independently from the POP, either cannot scale to large problems or suffer from slow convergence due to the typical degeneracy of such SDPs.
We propose a new algorithmic framework, SpecTrahedral pRoximal gradIent Descent along vErtices (STRIDE).
arXiv Detail & Related papers (2021-05-28T18:07:16Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point at each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning applications (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
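As a side note on the projection-free idea referenced in the stochastic Frank-Wolfe entry above: the sketch below is an illustration assumed here, not that paper's zeroth- or first-order algorithms. It minimizes a toy quadratic over the probability simplex using only a linear minimization oracle (LMO), so no projection onto the feasible set is ever computed; the objective, the simplex constraint set, and all identifiers are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, ITERS = 20, 200
A = rng.standard_normal((DIM, DIM))
Q = A.T @ A / DIM                       # PSD matrix defining a toy quadratic
b = rng.standard_normal(DIM)

def stochastic_grad(x, noise=0.1):
    # Noisy gradient of f(x) = 0.5 * x^T Q x - b^T x.
    return Q @ x - b + noise * rng.standard_normal(DIM)

x = np.ones(DIM) / DIM                  # start at the simplex barycenter
for t in range(1, ITERS + 1):
    g = stochastic_grad(x)
    # Linear minimization oracle over the probability simplex: the optimal
    # vertex is the standard basis vector at the smallest gradient coordinate.
    s = np.zeros(DIM)
    s[np.argmin(g)] = 1.0
    gamma = 2.0 / (t + 2)               # classical Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s     # convex combination: stays feasible

print("objective value:", 0.5 * x @ Q @ x - b @ x)
```

Each iterate is a convex combination of feasible points, so feasibility is maintained without any projection, which is exactly what makes Frank-Wolfe methods attractive when projections are expensive (e.g., onto semidefinite constraint sets). Note that convergence guarantees for stochastic Frank-Wolfe typically require averaged or variance-reduced gradient estimates, which this sketch omits for brevity.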