Fast Continuous and Integer L-shaped Heuristics Through Supervised
Learning
- URL: http://arxiv.org/abs/2205.00897v1
- Date: Mon, 2 May 2022 13:15:32 GMT
- Title: Fast Continuous and Integer L-shaped Heuristics Through Supervised
Learning
- Authors: Eric Larsen, Emma Frejinger, Bernard Gendron and Andrea Lodi
- Abstract summary: We propose a methodology to accelerate the solution of mixed-integer linear two-stage programs.
We aim at solving problems where the second stage is highly demanding.
Our core idea is to gain large reductions in online solution time while incurring small reductions in first-stage solution accuracy.
- Score: 4.521119623956821
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a methodology at the nexus of operations research and machine
learning (ML) leveraging generic approximators available from ML to accelerate
the solution of mixed-integer linear two-stage stochastic programs. We aim at
solving problems where the second stage is highly demanding. Our core idea is
to gain large reductions in online solution time while incurring small
reductions in first-stage solution accuracy by substituting the exact
second-stage solutions with fast, yet accurate supervised ML predictions. This
upfront investment in ML would be justified when similar problems are solved
repeatedly over time, for example, in transport planning related to fleet
management, routing and container yard management.
Our numerical results focus on the problem class seminally addressed with the
integer and continuous L-shaped cuts. Our extensive empirical analysis is
grounded in standardized families of problems derived from stochastic server
location (SSLP) and stochastic multi knapsack (SMKP) problems available in the
literature. The proposed method can solve the hardest instances of SSLP in less
than 9% of the time it takes the state-of-the-art exact method, and in the case
of SMKP the same figure is 20%. Average optimality gaps are in most cases less
than 0.1%.
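To make the core idea concrete, here is a minimal, purely illustrative Python sketch: a regressor is trained offline on exact second-stage values and then used online as a fast surrogate when scoring first-stage candidates. The synthetic recourse function, the network size, and the brute-force enumeration of first-stage candidates are assumptions made for this example only; the paper integrates such predictions into continuous and integer L-shaped heuristics rather than enumerating solutions.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- Offline phase: learn a surrogate of the second-stage value Q(x, xi). ---
# Here Q is a synthetic stand-in for an expensive second-stage MIP; in the
# paper's setting it would be obtained by actually solving the second stage.
def exact_second_stage_value(x, xi):
    # Hypothetical recourse cost: penalises demand (xi) not covered by open
    # facilities (x). Purely illustrative.
    return float(np.maximum(xi.sum() - 2.0 * x.sum(), 0.0) ** 2 + 0.1 * xi @ xi)

n_x, n_xi, n_samples = 5, 3, 5000
X = rng.integers(0, 2, size=(n_samples, n_x))
Xi = rng.normal(1.0, 0.5, size=(n_samples, n_xi))
y = np.array([exact_second_stage_value(x, xi) for x, xi in zip(X, Xi)])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(np.hstack([X, Xi]), y)

# --- Online phase: score first-stage candidates with the fast surrogate. ---
c = rng.uniform(1.0, 3.0, size=n_x)                  # first-stage costs
scenarios = rng.normal(1.0, 0.5, size=(200, n_xi))   # sampled scenarios

def predicted_objective(x):
    feats = np.hstack([np.tile(x, (len(scenarios), 1)), scenarios])
    return c @ x + surrogate.predict(feats).mean()

candidates = [np.array(bits) for bits in itertools.product([0, 1], repeat=n_x)]
best = min(candidates, key=predicted_objective)
print("chosen first-stage solution:", best, "predicted cost:", predicted_objective(best))
```

The point of the sketch is the division of labour: the expensive second-stage computations are paid for once, offline, and the online phase only queries the learned surrogate.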
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
Such problems are often formalized as Bi-Level Optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth distribution and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and yields more stable and accurate training.
Our model uses only 37% as many network parameters as the baseline, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
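As a rough illustration of expert-driven imitation learning in this setting (not the paper's graph-based architecture), the Python sketch below trains a small policy network to imitate a stand-in "expert" that selects one job per step; the job features, network, and toy expert rule are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical setup: at each step the policy scores the remaining jobs from
# their (here random) features and is trained to imitate an expert's choice.
torch.manual_seed(0)
n_jobs, feat_dim = 10, 6

policy = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def expert_choice(job_feats):
    # Stand-in "expert": pick the job with the smallest first feature
    # (e.g., shortest processing time). A real expert would be a strong
    # heuristic or an exact solver.
    return torch.argmin(job_feats[:, 0])

for step in range(500):
    job_feats = torch.rand(n_jobs, feat_dim)           # features of unscheduled jobs
    scores = policy(job_feats).squeeze(-1)             # one score per job
    target = expert_choice(job_feats).unsqueeze(0)     # expert's selected job index
    loss = loss_fn(scores.unsqueeze(0), target)        # cross-entropy over jobs
    opt.zero_grad()
    loss.backward()
    opt.step()
```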
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Learning Optimal Solutions via an LSTM-Optimization Framework [0.0]
We present a deep learning-optimization framework to tackle dynamic mixed-integer programs.
We develop a bidirectional Long Short Term Memory (LSTM) framework that can process information forward and backward in time.
We demonstrate our approach in predicting the optimal decisions for the single-item capacitated lot-sizing problem.
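A minimal sketch of this idea, under assumed inputs and labels: a bidirectional LSTM reads a demand sequence and emits one binary setup decision per period. The toy labels below stand in for decisions extracted from solved training instances; the architecture details are not taken from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
horizon, hidden = 12, 32

class BiLSTMPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # one logit per period

    def forward(self, demand):                  # demand: (batch, horizon, 1)
        out, _ = self.lstm(demand)
        return self.head(out).squeeze(-1)       # (batch, horizon) logits

model = BiLSTMPolicy()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    demand = torch.rand(64, horizon, 1) * 10.0
    # Stand-in labels: pretend the optimal policy produces in high-demand
    # periods; real labels would come from solved training instances.
    labels = (demand.squeeze(-1) > 5.0).float()
    loss = loss_fn(model(demand), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```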
arXiv Detail & Related papers (2022-07-06T19:38:01Z) - Minimizing Entropy to Discover Good Solutions to Recurrent Mixed Integer
Programs [0.0]
Current solvers for mixed-integer programming (MIP) problems are designed to perform well on a wide range of problems.
Recent works have shown that machine learning (ML) can be integrated with an MIP solver to inject domain knowledge and efficiently close the optimality gap.
This paper proposes an online solver that uses the notion of entropy to efficiently build a model with minimal training data and tuning.
arXiv Detail & Related papers (2022-02-07T18:52:56Z) - MILP for the Multi-objective VM Reassignment Problem [1.3649494534428743]
The Multi-objective Machine Reassignment Problem is a challenging problem for constraint and mixed-integer programming approaches.
We show that a mixed-integer optimisation solver, such as IBM ILOG CPLEX, can be used for the Multi-objective Machine Reassignment Problem.
We show that it is useful only for small or medium-scale data centres and with some relaxations, such as an optimality tolerance gap and a limited number of directions explored in the search space.
arXiv Detail & Related papers (2021-03-18T17:46:57Z) - Learning to Schedule Heuristics in Branch-and-Bound [25.79025327341732]
Real-world applications typically require finding good solutions early in the search to enable fast decision-making.
We propose the first data-driven framework for scheduling heuristics in an exact MIP solver.
Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on a class of challenging instances.
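For context, the primal integral referenced above accumulates the scaled primal gap of the incumbent solution over the solving time, so smaller values mean good solutions were found earlier. Below is a small self-contained sketch of how it can be computed from an incumbent trace; the helper names and the example trace are illustrative only.

```python
def primal_gap(incumbent, best_known):
    """Scaled primal gap in [0, 1]; 1 if there is no incumbent or signs differ."""
    if incumbent is None or incumbent * best_known < 0:
        return 1.0
    if incumbent == best_known:
        return 0.0
    return abs(best_known - incumbent) / max(abs(best_known), abs(incumbent))

def primal_integral(incumbent_trace, best_known, time_limit):
    """Integrate the primal gap over [0, time_limit] from (time, incumbent) pairs.

    Assumes all trace times are within the time limit.
    """
    events = sorted(incumbent_trace) + [(time_limit, None)]
    total, t_prev, current = 0.0, 0.0, None
    for t, value in events:
        total += primal_gap(current, best_known) * (t - t_prev)
        t_prev = t
        current = value if value is not None else current
    return total

# Example: incumbents found at t=2s (120) and t=5s (105); best known value is 100.
print(primal_integral([(2.0, 120.0), (5.0, 105.0)], best_known=100.0, time_limit=10.0))
```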
arXiv Detail & Related papers (2021-03-18T14:49:52Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Stochastic Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
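As a reminder of the basic building block (not the paper's parallel MDP algorithm), here is a minimal mirror descent iteration with the entropy mirror map on the probability simplex, i.e. the exponentiated-gradient update; the toy linear objective is just for illustration.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step_size, n_iters):
    """Entropy-mirror-map (exponentiated gradient) updates on the simplex."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x * np.exp(-step_size * grad(x))   # multiplicative update
        x /= x.sum()                           # renormalise onto the simplex
    return x

# Example: minimise c @ x over the simplex; the iterate concentrates on the
# coordinate with the smallest cost.
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, step_size=0.5, n_iters=200)
print(x_star)   # approximately [0, 1, 0]
```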
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
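To illustrate what plain model-based planning with a generative model means in the tabular case, here is a toy Python sketch: sample each state-action pair N times to build an empirical transition model, then plan by value iteration on that model. The toy MDP, the sample budget N, and all other details are assumptions for the example and say nothing about the paper's minimax-optimality analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP and a generative model that samples next states on demand.
n_states, n_actions, gamma = 4, 2, 0.9
P_true = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # (S, A, S)
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def generative_model(s, a):
    return rng.choice(n_states, p=P_true[s, a])

# Plain model-based planning: estimate the transition model from N samples per
# state-action pair, then run value iteration on the empirical MDP.
N = 200
P_hat = np.zeros_like(P_true)
for s in range(n_states):
    for a in range(n_actions):
        for _ in range(N):
            P_hat[s, a, generative_model(s, a)] += 1.0 / N

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P_hat @ V        # (S, A) one-step lookahead values
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print("greedy policy:", policy, "values:", np.round(V, 3))
```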