Neur2SP: Neural Two-Stage Stochastic Programming
- URL: http://arxiv.org/abs/2205.12006v1
- Date: Fri, 20 May 2022 22:09:22 GMT
- Title: Neur2SP: Neural Two-Stage Stochastic Programming
- Authors: Justin Dumouchelle, Rahul Patel, Elias B. Khalil, Merve Bodur
- Abstract summary: We develop Neur2SP, a new method that approximates the expected value function via a neural network.
Neur2SP takes less than 1.66 seconds across all problems, achieving high-quality solutions even as the number of scenarios increases.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic programming is a powerful modeling framework for decision-making
under uncertainty. In this work, we tackle two-stage stochastic programs
(2SPs), the most widely applied and studied class of stochastic programming
models. Solving 2SPs exactly requires evaluation of an expected value function
that is computationally intractable. Additionally, having a mixed-integer
linear program (MIP) or a nonlinear program (NLP) in the second stage further
aggravates the problem difficulty. In such cases, solving them can be
prohibitively expensive even if specialized algorithms that exploit problem
structure are employed. Finding high-quality (first-stage) solutions -- without
leveraging problem structure -- can be crucial in such settings. We develop
Neur2SP, a new method that approximates the expected value function via a
neural network to obtain a surrogate model that can be solved more efficiently
than the traditional extensive formulation approach. Moreover, Neur2SP makes no
assumptions about the problem structure, in particular about the second-stage
problem, and can be implemented using an off-the-shelf solver and open-source
libraries. Our extensive computational experiments on benchmark 2SP datasets
from four problem classes with different structures (containing MIP and NLP
second-stage problems) show the efficiency (time) and efficacy (solution
quality) of Neur2SP. Specifically, the proposed method takes less than 1.66
seconds across all problems, achieving high-quality solutions even as the
number of scenarios increases, an ideal property that is difficult to achieve
with traditional 2SP solution techniques. By contrast, the most generic
baseline method typically requires minutes to hours to find solutions of
comparable quality.
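The core idea of the abstract — train a neural network to approximate the expected value function, then optimize over the cheap surrogate instead of the full extensive formulation — can be sketched on a toy newsvendor instance. Everything below is an illustrative assumption rather than the paper's actual pipeline: the prices, costs, and demand scenarios are made up; the hidden layer is fixed and only the output weights are fit by least squares (a random-features shortcut, whereas Neur2SP trains the full network); and plain enumeration stands in for the MIP embedding of the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2SP: a newsvendor problem. First-stage decision x = order quantity;
# the uncertainty is a set of demand scenarios D. (Toy instance assumed for
# illustration; these numbers are not from the paper.)
p, c = 5.0, 1.0                              # sale price, unit order cost
demand = rng.uniform(20.0, 100.0, size=500)  # demand scenarios

def expected_value(x):
    """Expected profit E_D[p * min(x, D)] - c * x, averaged over scenarios."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return p * np.minimum(x[None, :], demand[:, None]).mean(axis=0) - c * x

# Training data: sampled first-stage decisions and their expected values.
X = np.linspace(0.0, 120.0, 200)
y = expected_value(X)

# One-hidden-layer ReLU surrogate of the expected value function. For a
# simple deterministic sketch we fix the hidden layer (hinge kinks spread
# over the decision domain) and fit only the output layer by least squares.
H = 64
s = rng.choice([-1.0, 1.0], size=H)   # hinge orientations
t = rng.uniform(0.0, 120.0, size=H)   # kink locations
W, b = s[None, :], -s * t             # hidden layer computes relu(s * (x - t))

def features(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    phi = np.maximum(x[:, None] @ W + b, 0.0)   # ReLU hidden activations
    return np.c_[phi, np.ones(len(x))]          # plus a bias column

w_out, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(x):
    return features(x) @ w_out

# "Solve" the surrogate by enumerating candidate first-stage decisions.
# (Neur2SP instead embeds the trained ReLU network in a MIP and hands the
# surrogate to an off-the-shelf solver.)
grid = np.linspace(0.0, 120.0, 1201)
x_surr = grid[np.argmax(surrogate(grid))]
x_true = grid[np.argmax(expected_value(grid))]
print(f"surrogate optimum x = {x_surr:.1f}, exact optimum x = {x_true:.1f}")
```

The point of the sketch is the separation of concerns: the expensive scenario average is evaluated only offline to build training data, while the online optimization touches only the small piecewise-linear network, which is exactly what makes a MIP embedding (e.g. big-M constraints per ReLU unit) tractable.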
Related papers
- Optimized QUBO formulation methods for quantum computing [0.4999814847776097]
We show how to apply our techniques in the case of an NP-hard optimization problem inspired by a real-world financial scenario.
We then submit instances of this problem to two D-Wave quantum annealers, comparing the performance of our novel approach with the standard methods used in these scenarios.
arXiv Detail & Related papers (2024-06-11T19:59:05Z)
- Genetic-based Constraint Programming for Resource Constrained Job Scheduling [5.068093754585243]
Resource constrained job scheduling is a computationally hard optimisation problem that originates in the mining industry.
Off-the-shelf solutions cannot solve this problem satisfactorily in reasonable timeframes.
This paper proposes a genetic programming algorithm to discover efficient search strategies for constraint programming.
arXiv Detail & Related papers (2024-02-01T09:57:38Z)
- Deep Learning for Two-Stage Robust Integer Optimization [15.876446950057389]
We propose Neur2RO, a deep-learning-augmented instantiation of the column-and-constraint-generation algorithm.
A custom-designed neural network is trained to estimate the optimal value and feasibility of the second-stage problem.
Neur2RO produces high-quality solutions quickly, outperforming state-of-the-art methods on two-stage knapsack, capital budgeting, and facility location problems.
arXiv Detail & Related papers (2023-10-06T16:11:46Z)
- An Efficient Learning-Based Solver for Two-Stage DC Optimal Power Flow with Feasibility Guarantees [4.029937264494929]
This paper proposes a learning method to solve the two-stage problem more efficiently while preserving solution quality.
A technique called the gauge map is incorporated into the learning architecture design to guarantee the feasibility of the learned solutions with respect to the network constraints.
arXiv Detail & Related papers (2023-04-03T22:56:08Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piecewise-linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- A Two-stage Framework and Reinforcement Learning-based Optimization Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined.
The scheduling problem is solved in two stages: a finite Markov decision process (MDP) and a mixed-integer programming process.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
arXiv Detail & Related papers (2021-03-10T03:16:12Z)
- High Dimensional Level Set Estimation with Bayesian Neural Network [58.684954492439424]
This paper proposes novel methods to solve high-dimensional Level Set Estimation problems using Bayesian Neural Networks.
For each problem, we derive the corresponding information-theoretic acquisition function to sample the data points.
Numerical experiments on both synthetic and real-world datasets show that our proposed method can achieve better results compared to existing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-17T23:21:53Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, and other settings.
Existing exact and approximate algorithms usually fail in such settings because they either require intolerable running times or excessive computational resources.
We propose NeuSearcher, which leverages knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.