An Expandable Machine Learning-Optimization Framework to Sequential
Decision-Making
- URL: http://arxiv.org/abs/2311.06972v1
- Date: Sun, 12 Nov 2023 21:54:53 GMT
- Title: An Expandable Machine Learning-Optimization Framework to Sequential
Decision-Making
- Authors: Dogacan Yilmaz and İ. Esra Büyüktahtakın
- Abstract summary: We present an integrated prediction-optimization (PredOpt) framework to efficiently solve sequential decision-making problems.
We address the key issues of sequential dependence, infeasibility, and generalization in machine learning (ML) to make predictions for optimal solutions to combinatorial problem instances.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an integrated prediction-optimization (PredOpt) framework to
efficiently solve sequential decision-making problems by predicting the values
of binary decision variables in an optimal solution. We address the key issues
of sequential dependence, infeasibility, and generalization in machine learning
(ML) to make predictions for optimal solutions to combinatorial problems. The
sequential nature of the combinatorial optimization problems considered is
captured with recurrent neural networks and a sliding-attention window. We
integrate an attention-based encoder-decoder neural network architecture with
an infeasibility-elimination and generalization framework to learn high-quality
feasible solutions to time-dependent optimization problems. In this framework,
the required level of predictions is optimized to eliminate the infeasibility
of the ML predictions. These predictions are then fixed in mixed-integer
programming (MIP) problems to solve them quickly with the aid of a commercial
solver. We demonstrate our approach on two well-known dynamic NP-Hard
optimization problems: multi-item capacitated lot-sizing (MCLSP) and
multi-dimensional knapsack (MSMK). Our results show that models trained on
shorter and smaller-dimensional instances can be successfully used to predict
longer and larger-dimensional problems. The solution time can be reduced by
three orders of magnitude with an average optimality gap below 0.1%. We compare
PredOpt with various specially designed heuristics and show that our framework
outperforms them. PredOpt can be advantageous for solving dynamic MIP problems
that need to be solved instantly and repetitively.
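
To make the fix-and-solve step concrete, below is a minimal Python sketch on a toy multi-dimensional knapsack: predicted probabilities for the binary variables (a placeholder random vector here, standing in for PredOpt's trained encoder-decoder) are thresholded, the confident ones are fixed through variable bounds, and the reduced MIP is passed to a solver (scipy's milp stands in for the commercial solver used in the paper). The infeasibility-elimination step that tunes how many predictions to fix is not reproduced.

# Sketch of the predict-then-fix step on a toy multi-dimensional knapsack.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_items, n_dims = 50, 5
values = rng.uniform(1, 10, n_items)             # objective coefficients
weights = rng.uniform(1, 5, (n_dims, n_items))   # resource consumption
capacity = 0.5 * weights.sum(axis=1)             # knapsack capacities

# Placeholder for ML predictions: probability that each binary variable is 1
# in an optimal solution (a trained model would supply these).
probs = rng.uniform(0, 1, n_items)

# Fix only the most confident predictions; leave the rest free for the solver.
lb, ub = np.zeros(n_items), np.ones(n_items)
confident = np.abs(probs - 0.5) > 0.4
lb[confident] = ub[confident] = np.round(probs[confident])

res = milp(
    c=-values,                                   # maximize total value
    constraints=LinearConstraint(weights, -np.inf, capacity),
    integrality=np.ones(n_items),
    bounds=Bounds(lb, ub),
)
if res.success:
    print("objective of reduced MIP with fixed predictions:", -res.fun)
else:
    print("fixed predictions made the MIP infeasible:", res.message)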
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth distribution and the outer loss becomes an expected loss over the inner distribution.
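
As a rough illustration of that reformulation (not the paper's exact estimator), the Python sketch below smooths the inner solution of a ridge-regression hyperparameter problem with an assumed Gaussian distribution and differentiates a Monte-Carlo estimate of the expected outer loss with respect to the outer variable.

# Monte-Carlo sketch: expected outer loss over a smoothed inner solution.
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(80, 10), torch.randn(80)
X_val, y_val = torch.randn(40, 10), torch.randn(40)
lam = torch.tensor(0.1, requires_grad=True)   # outer (hyper-)parameter
sigma = 0.05                                  # smoothing scale (assumption)

def inner_solution(lam):
    # Closed-form ridge solution w*(lam) = (X'X + lam I)^{-1} X'y.
    A = X_tr.T @ X_tr + lam * torch.eye(10)
    return torch.linalg.solve(A, X_tr.T @ y_tr)

w_star = inner_solution(lam)
samples = w_star + sigma * torch.randn(64, 10)        # w ~ N(w*, sigma^2 I)
outer_loss = ((X_val @ samples.T - y_val[:, None]) ** 2).mean()
outer_loss.backward()                                  # gradient w.r.t. lam
print("d(outer)/d(lam):", lam.grad)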
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker conditions of optimality is introduced, enabling fully self-supervised training of both neural networks.
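
A minimal sketch of such a KKT-residual loss for an inequality-constrained QP is shown below; it only illustrates the self-supervised training signal, not the paper's architecture, and the primal and dual predictions are plain tensors standing in for the two networks.

# KKT-residual loss for min 0.5 x'Qx + c'x  s.t.  Ax <= b.
import torch

def kkt_loss(x, lam, Q, c, A, b):
    grad_f = Q @ x + c
    g = A @ x - b                                 # g(x) <= 0 should hold
    stationarity = grad_f + A.T @ lam             # grad f + A'lam = 0
    primal = torch.relu(g)                        # violation of Ax <= b
    dual = torch.relu(-lam)                       # violation of lam >= 0
    comp_slack = lam * g                          # lam_i * g_i(x) = 0
    return (stationarity.pow(2).sum() + primal.pow(2).sum()
            + dual.pow(2).sum() + comp_slack.pow(2).sum())

Q = torch.eye(3)
c = torch.tensor([1.0, -2.0, 0.5])
A = torch.tensor([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = torch.tensor([1.0, 1.5])
x = torch.zeros(3, requires_grad=True)      # would come from the primal network
lam = torch.zeros(2, requires_grad=True)    # would come from the dual network
loss = kkt_loss(x, lam, Q, c, A, b)
loss.backward()                             # gradients flow into the predictors
print(float(loss))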
arXiv Detail & Related papers (2024-09-12T14:17:23Z) - Deep learning enhanced mixed integer optimization: Learning to reduce model dimensionality [0.0]
This work introduces a framework to address the computational complexity inherent in Mixed-Integer Programming (MIP).
By employing deep learning, we construct problem-specific models that identify and exploit common structures across MIP instances.
We present an algorithm for generating synthetic data enhancing the robustness and generalizability of our models.
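
One simple way to picture the synthetic-data idea (a generic perturbation scheme, not necessarily the paper's algorithm) is to perturb a seed instance's nonzero coefficients while preserving its sparsity pattern, so the generated instances share the common structure the model is meant to exploit.

# Generate synthetic MIP instances by perturbing a seed instance's coefficients.
import numpy as np

def perturb_instance(c, A, b, scale=0.1, rng=None):
    """Return a new (c, A, b) with multiplicative noise on the coefficients,
    keeping A's sparsity pattern (the shared structure) unchanged."""
    rng = rng or np.random.default_rng()
    noise = lambda M: M * rng.uniform(1 - scale, 1 + scale, M.shape)
    A_new = np.where(A != 0, noise(A), 0.0)
    return noise(c), A_new, noise(b)

rng = np.random.default_rng(0)
c = rng.uniform(1, 10, 20)
A = rng.uniform(0, 3, (5, 20)) * (rng.random((5, 20)) < 0.3)
b = rng.uniform(10, 20, 5)
dataset = [perturb_instance(c, A, b, rng=rng) for _ in range(100)]
print(len(dataset), "synthetic instances generated")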
arXiv Detail & Related papers (2024-01-17T19:15:13Z) - Optimization and Optimizers for Adversarial Robustness [10.279287131070157]
In this paper, we introduce a novel framework that blends a general-purpose constrained-optimization solver with Constraint Folding (PWCF).
Regarding reliability, PWCF provides solutions with stationarity measures and feasibility tests to assess the solution quality.
We further explore the distinct patterns in the solutions found for solving these problems using various combinations of losses, perturbation models, and optimization algorithms.
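
Constraint folding itself can be illustrated in a few lines: many inequality constraints are collapsed into one scalar constraint that a general-purpose nonlinear solver can handle. The max- and ReLU-sum folds below are common choices; the exact folds used in PWCF may differ.

# Fold many constraints g_i(x) <= 0 into a single scalar quantity.
import numpy as np

def fold_max(g_vals):
    # max_i g_i(x) <= 0   <=>   all g_i(x) <= 0
    return np.max(g_vals)

def fold_relu_sum(g_vals):
    # sum_i max(g_i(x), 0) is 0 exactly when all constraints are satisfied
    return np.sum(np.maximum(g_vals, 0.0))

g = np.array([-0.2, 0.05, -1.0])        # one constraint slightly violated
print(fold_max(g), fold_relu_sum(g))    # both positive, signalling infeasibility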
arXiv Detail & Related papers (2023-03-23T16:22:59Z) - MIP-GNN: A Data-Driven Framework for Guiding Combinatorial Solvers [13.790116387956703]
Mixed-integer programming (MIP) technology offers a generic way of formulating and solving optimization problems.
MIP-GNN is a general framework for enhancing such solvers with data-driven insights.
We integrate MIP-GNN into a state-of-the-art MIP solver, applying it to tasks such as node selection and warm-starting.
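
The sketch below gives a bare-bones flavor of predicting per-variable "biases" from the variable-constraint bipartite graph of a MIP with two rounds of hand-rolled message passing; it is a stand-in illustration, not the MIP-GNN architecture or its training procedure.

# Toy message passing over a MIP's variable-constraint bipartite graph.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_cons, d = 30, 12, 8
A = (rng.random((n_cons, n_vars)) < 0.2) * rng.uniform(1, 5, (n_cons, n_vars))

h_var = rng.normal(size=(n_vars, d))      # initial variable embeddings
h_con = rng.normal(size=(n_cons, d))      # initial constraint embeddings
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_out = rng.normal(size=d)

for _ in range(2):                         # two rounds of message passing
    h_con = np.tanh(A @ h_var @ W1 + h_con)      # variables -> constraints
    h_var = np.tanh(A.T @ h_con @ W2 + h_var)    # constraints -> variables

biases = 1.0 / (1.0 + np.exp(-h_var @ w_out))    # per-variable bias in (0, 1)
warm_start = (biases > 0.5).astype(int)           # candidate warm-start values
print(warm_start)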
arXiv Detail & Related papers (2022-05-27T19:34:14Z) - An Online Prediction Approach Based on Incremental Support Vector
Machine for Dynamic Multiobjective Optimization [19.336520152294213]
We propose a novel prediction algorithm based on incremental support vector machine (ISVM)
We treat the solving of dynamic multiobjective optimization problems (DMOPs) as an online learning process.
The proposed algorithm can effectively tackle dynamic multiobjective optimization problems.
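
A hedged sketch of the online-learning view: scikit-learn's SGDClassifier with hinge loss (a linear SVM trained incrementally via partial_fit) stands in for the ISVM, labels past solutions as promising or not, and screens candidate solutions after each environment change; the labelling rule here is a toy assumption.

# Incrementally updated linear SVM screening candidate solutions over time.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="hinge")               # linear SVM, incremental updates
classes = np.array([0, 1])

for t in range(5):                                # successive environments
    X_t = rng.normal(loc=t * 0.2, size=(40, 6))           # decision vectors at time t
    y_t = (X_t.sum(axis=1) > t * 1.2).astype(int)         # 1 = near Pareto set (toy rule)
    model.partial_fit(X_t, y_t, classes=classes)          # incremental update

candidates = rng.normal(loc=1.0, size=(200, 6))
seeds = candidates[model.predict(candidates) == 1]        # predicted promising seeds
print(len(seeds), "candidate solutions kept for the next optimization run")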
arXiv Detail & Related papers (2021-02-24T08:51:23Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
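
The flavor of conditional normalized maximum likelihood can be shown on a toy regression problem: for each candidate label on a grid, the model is refit on the data augmented with the query point and that label, and the resulting likelihoods are normalized. The paper's contribution is a tractable approximation of this procedure for high-capacity networks, which the sketch below does not attempt.

# Toy conditional NML: refit per candidate label, then normalize likelihoods.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=30)

def fit_and_likelihood(X, y, x_q, y_q, sigma=0.2):
    # Refit a linear model on the augmented data, then return the Gaussian
    # likelihood it assigns to the candidate label y_q at x_q.
    Xa = np.vstack([X, [x_q]])
    ya = np.append(y, y_q)
    Phi = np.column_stack([np.ones(len(Xa)), Xa[:, 0]])
    w, *_ = np.linalg.lstsq(Phi, ya, rcond=None)
    pred = w[0] + w[1] * x_q[0]
    return np.exp(-0.5 * ((y_q - pred) / sigma) ** 2)

x_query = np.array([0.5])
grid = np.linspace(-2, 2, 41)
scores = np.array([fit_and_likelihood(X, y, x_query, yq) for yq in grid])
p_nml = scores / scores.sum()                 # conditional NML distribution
print("cNML mean prediction at x=0.5:", float(grid @ p_nml))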
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
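
The piecewise structure is easy to see numerically: in the toy knapsack below, sweeping a single predicted coefficient changes the chosen solution only at a few transition points, so the induced regret is piecewise constant in the prediction (brute-force enumeration is used only for illustration).

# Regret of a 4-item knapsack as a function of one predicted coefficient.
import itertools
import numpy as np

values = np.array([6.0, 5.0, 3.0, 3.0])    # true item values
weights = np.array([3.0, 2.0, 2.0, 1.0])
cap = 4.0

def best_solution(vals):
    # Enumerate all 0/1 assignments and return the feasible one maximizing vals.
    feasible = [x for x in itertools.product([0, 1], repeat=4)
                if np.dot(weights, x) <= cap]
    return max(feasible, key=lambda x: np.dot(vals, x))

true_opt = np.dot(values, best_solution(values))
for v0 in np.linspace(0.0, 10.0, 6):          # sweep the predicted coefficient
    pred = values.copy()
    pred[0] = v0                               # only item 0's value is predicted
    x_hat = best_solution(pred)                # decision taken under the prediction
    regret = true_opt - np.dot(values, x_hat)  # loss of that decision at true values
    print(f"predicted v0={v0:4.1f}  regret={regret:.1f}")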
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to better decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
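
A toy version of the surrogate idea: a large LP over x is re-parameterized as x = P y with a small y and the reduced problem is solved instead. The paper learns the mapping end-to-end for decision quality, whereas the fixed random nonnegative projection here only illustrates the dimensionality reduction.

# Solve a low-dimensional surrogate of a large LP via x = P @ y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 500, 100, 10                   # original vars, constraints, surrogate dim
c = rng.uniform(0, 1, n)
A = rng.uniform(0, 1, (m, n))
b = rng.uniform(50, 100, m)

P = np.abs(rng.normal(size=(n, k)))      # nonnegative so y >= 0 implies x >= 0
res = linprog(
    c=-(c @ P),                          # maximize c'x = (c'P) y
    A_ub=A @ P,                          # A x <= b  becomes  (A P) y <= b
    b_ub=b,
    bounds=[(0, None)] * k,
)
x_surrogate = P @ res.x
print("surrogate objective:", float(c @ x_surrogate))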
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
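
Schematically, the loop alternates between refitting a small surrogate network on evaluated designs, minimizing the surrogate to propose the next design, and evaluating that design with the expensive simulator; in the sketch below a cheap analytic function stands in for the FEM call.

# Self-directed online learning loop with a surrogate network and a mock simulator.
import torch

def simulate(x):                          # placeholder for the FEM evaluation
    return float((x - 0.3).pow(2).sum())

torch.manual_seed(0)
dim = 4
net = torch.nn.Sequential(torch.nn.Linear(dim, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
X = torch.rand(8, dim)                    # initial designs
Y = torch.tensor([[simulate(x)] for x in X])

for step in range(10):
    # 1) refit the surrogate on all evaluated designs
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        torch.nn.functional.mse_loss(net(X), Y).backward()
        opt.step()
    # 2) search the surrogate for a promising new design
    x_new = torch.rand(1, dim, requires_grad=True)
    opt_x = torch.optim.Adam([x_new], lr=5e-2)
    for _ in range(100):
        opt_x.zero_grad()
        net(x_new).sum().backward()
        opt_x.step()
    # 3) evaluate it with the expensive simulator and grow the dataset
    x_det = x_new.detach().clamp(0, 1)
    X = torch.cat([X, x_det])
    Y = torch.cat([Y, torch.tensor([[simulate(x_det[0])]])])

print("best objective found:", float(Y.min()))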
arXiv Detail & Related papers (2020-02-04T20:00:28Z)