Efficient End-to-End Learning for Decision-Making: A Meta-Optimization Approach
- URL: http://arxiv.org/abs/2505.11360v1
- Date: Fri, 16 May 2025 15:27:50 GMT
- Title: Efficient End-to-End Learning for Decision-Making: A Meta-Optimization Approach
- Authors: Rares Cristian, Pavithra Harsha, Georgia Perakis, Brian Quanz
- Abstract summary: We present a meta-optimization method that learns efficient algorithms to approximate optimization problems. We prove exponential convergence, approximation guarantees, and generalization bounds for our learning method. This method offers superior computational efficiency, producing high-quality approximations faster and scaling better with problem size compared to existing techniques.
- Score: 5.84228364962637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: End-to-end learning has become a widely applicable and studied approach to training predictive ML models to be aware of their impact on downstream decision-making tasks. These end-to-end models often outperform traditional methods that separate prediction from optimization and focus myopically on prediction error. However, the computational complexity of end-to-end frameworks poses a significant challenge, particularly for large-scale problems: when training an ML model with gradient descent, every gradient computation requires solving an expensive optimization problem. We present a meta-optimization method that learns efficient algorithms to approximate optimization problems, dramatically reducing the computational overhead of solving the decision problem in general; we leverage this speedup during training within the end-to-end framework. Our approach introduces a neural network architecture that near-optimally solves optimization problems while ensuring feasibility through alternating projections. We prove exponential convergence, approximation guarantees, and generalization bounds for our learning method. The method offers superior computational efficiency, producing high-quality approximations faster and scaling better with problem size than existing techniques. Our approach applies to a wide range of optimization problems, including deterministic, single-stage, and two-stage stochastic optimization problems. We illustrate how the proposed method applies to (1) an electricity generation problem using real data from an electricity routing company coordinating the movement of electricity throughout 13 states, (2) a shortest path problem with a computer vision task of predicting edge costs from terrain maps, (3) a two-stage multi-warehouse cross-fulfillment newsvendor problem, as well as a variety of other newsvendor-like problems.
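The feasibility mechanism named in the abstract, alternating projections, is simple to illustrate. The following is a minimal NumPy sketch assuming a toy feasible region (a box intersected with a hyperplane); the function names and problem data are illustrative, not the paper's architecture.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box {x : lo <= x <= hi}
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    # Euclidean projection onto the hyperplane {x : a @ x = b}
    return x - (a @ x - b) / (a @ a) * a

def alternating_projections(x, lo, hi, a, b, iters=100):
    # Repeatedly project onto each constraint set; for convex sets with
    # nonempty intersection, the iterates converge to a feasible point.
    for _ in range(iters):
        x = project_hyperplane(x, a, b)
        x = project_box(x, lo, hi)
    return x

# Toy usage: restore feasibility of a network's raw output.
raw = np.array([2.0, -1.0, 0.5])
x_feas = alternating_projections(raw, lo=0.0, hi=1.0,
                                 a=np.array([1.0, 1.0, 1.0]), b=1.5)
print(x_feas, x_feas.sum())
```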
Related papers
- Solver-Free Decision-Focused Learning for Linear Optimization Problems [6.305123652677644]
In many real-world scenarios, the parameters of the optimization problem are not known a priori and must be predicted from contextual features. This gives rise to predict-then-optimize problems, where a machine learning model predicts problem parameters that are then used to make decisions via optimization. We propose a solver-free training method that exploits the geometric structure of linear optimization to enable efficient training with minimal degradation in solution quality.
arXiv Detail & Related papers (2025-05-28T10:55:16Z)
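To make the predict-then-optimize setting of the entry above concrete, here is a minimal sketch of the vanilla pipeline that the paper's solver-free method is designed to avoid at training time: a stub prediction model maps features to LP cost coefficients, and a solver produces the decision. All names and data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical predict-then-optimize pipeline: features -> predicted
# costs -> LP decision. Solver-free training removes the solve from the
# training loop; this sketch shows the standard pipeline it replaces.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))           # stub for a trained prediction model

def predict_costs(features):
    return features @ W               # predicted LP cost vector c_hat

def decide(c_hat, A_ub, b_ub):
    # Decision step: minimize c_hat @ x subject to A_ub @ x <= b_ub, x >= 0.
    res = linprog(c=c_hat, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x

features = rng.normal(size=5)
A_ub, b_ub = np.ones((1, 3)), np.array([10.0])
x_star = decide(predict_costs(features), A_ub, b_ub)
```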
- Self-Supervised Penalty-Based Learning for Robust Constrained Optimization [4.297070083645049]
We propose a new methodology for parameterized constrained robust optimization, based on learning with a self-supervised penalty-based loss function. Our approach effectively learns neural network approximations whose inference time is significantly smaller than that of traditional solvers.
arXiv Detail & Related papers (2025-03-07T06:42:17Z)
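The self-supervised penalty-based loss in the entry above can be sketched directly: the training objective is the problem objective plus a weighted squared constraint violation, so no precomputed optimal solutions are needed. The toy problem and names below are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy parameterized problem: minimize c @ x  s.t.  A @ x <= b, x >= 0,
# where (z, c) is the sampled context. The network maps z directly to x.
net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
A = torch.ones(1, 3)
b = torch.tensor([1.0])

def penalty_loss(z, c, lam=10.0):
    x = net(z)                                    # candidate decision (x >= 0 via Softplus)
    objective = (c * x).sum(dim=-1)               # problem objective
    violation = torch.relu(x @ A.T - b).pow(2).sum(dim=-1)  # squared constraint violation
    return (objective + lam * violation).mean()   # self-supervised: no labels needed

for _ in range(200):                              # training loop over sampled contexts
    z = torch.randn(64, 4)
    c = torch.randn(64, 3)
    opt.zero_grad()
    penalty_loss(z, c).backward()
    opt.step()
```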
- Preference-Based Gradient Estimation for ML-Guided Approximate Combinatorial Optimization [15.102119312523696]
Combinatorial optimization (CO) problems arise across a broad spectrum of domains, including medicine, logistics, and manufacturing. We propose a learning-based approach that enhances existing non-learned approximation algorithms for CO. Our method is trained end-to-end in a self-supervised fashion, using a novel gradient estimation scheme that treats the approximation algorithm as a black box.
arXiv Detail & Related papers (2025-02-26T18:23:07Z)
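Treating an approximation algorithm as a black box, as in the entry above, usually forces gradients to be estimated by sampling rather than backpropagation. Below is a generic score-function (smoothing/evolution-strategies style) estimator as a stand-in; it is not the paper's preference-based scheme, and the heuristic and names are hypothetical.

```python
import torch

def heuristic_cost(theta_sample):
    # Stand-in for a non-differentiable approximation algorithm:
    # returns the cost of the solution it produces (hypothetical).
    return (theta_sample - 2.0).pow(2).sum()

theta = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
sigma = 0.1

for _ in range(500):
    eps = torch.randn(16, 3)
    with torch.no_grad():
        costs = torch.stack([heuristic_cost(theta + sigma * e) for e in eps])
        baseline = costs.mean()            # variance-reduction baseline
    # Gradient of E[cost(theta + sigma*eps)] w.r.t. theta, estimated as
    # the mean over samples of (cost - baseline) * eps / sigma.
    grad = ((costs - baseline).unsqueeze(1) * eps).mean(dim=0) / sigma
    opt.zero_grad()
    theta.grad = grad
    opt.step()
```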
- Learning to Optimize for Mixed-Integer Non-linear Programming [20.469394148261838]
Mixed-integer nonlinear programs (MINLPs) arise in diverse domains such as energy systems and transportation. MINLPs are notoriously difficult to solve, particularly on a large scale. We propose a novel deep-learning approach with two learnable correction layers to ensure solution integrality and a post-processing step to improve solution feasibility.
arXiv Detail & Related papers (2024-10-14T20:14:39Z)
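A common way to impose integrality on a network's continuous output, loosely related to the correction layers in the entry above, is rounding with a straight-through gradient. The sketch below shows that generic trick; it is an assumption, not the paper's learnable layers.

```python
import torch

def straight_through_round(x):
    # Forward: round to the nearest integer.
    # Backward: treat rounding as the identity so gradients flow.
    return x + (torch.round(x) - x).detach()

# Toy usage: a continuous relaxation is corrected to integer values,
# yet the loss still backpropagates to the relaxed variables.
x_relaxed = torch.tensor([0.2, 1.7, 3.4], requires_grad=True)
x_int = straight_through_round(x_relaxed)       # tensor([0., 2., 3.])
loss = (x_int - torch.tensor([1.0, 2.0, 3.0])).pow(2).sum()
loss.backward()                                 # x_relaxed.grad is defined
```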
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The predict-then-optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z)
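The Dual Ascent emulation in the entry above can be sketched compactly for an equality-constrained quadratic program: a network predicts dual variables from the problem context and is trained by maximizing the dual objective. The toy problem and names are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

# Toy problem: min_x 0.5*||x - q||^2  s.t.  A @ x = b, with context (q, b).
# A network predicts the dual variable lam; the dual objective is the loss.
A = torch.tensor([[1.0, 1.0, 1.0]])
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def dual_objective(q, b, lam):
    x = q - lam @ A                  # closed-form primal minimizer of the Lagrangian
    return 0.5 * (x - q).pow(2).sum(dim=-1) + (lam * (x @ A.T - b)).sum(dim=-1)

for _ in range(300):
    q = torch.randn(32, 3)
    b = torch.randn(32, 1)
    lam = net(torch.cat([q, b], dim=-1))       # predicted duals from context
    loss = -dual_objective(q, b, lam).mean()   # maximize the dual => minimize its negative
    loss.backward()
    opt.step()
    opt.zero_grad()
```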
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate.
Our model's network parameters are reduced to only 37% of the state-of-the-art model's, and the average gap between our solutions and the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach that combines the best of learned and traditional methods: a bi-level framework with an upper-level learning method that optimizes the graph itself.
Such a bi-level approach simplifies learning on the original hard CO problem and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches: the need to repeatedly solve the optimization problem during training, which we replace with a cache of solutions that grows over time.
We show that even a very slow growth rate of the cache is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z)
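The caching idea in the entry above is easy to sketch: maintain a pool of previously computed solutions and, instead of calling the solver at every gradient step, pick the cached candidate that is best under the predicted costs, growing the pool only occasionally. This is a loose illustration under assumed names, not the paper's exact contrastive loss.

```python
import numpy as np

solution_cache = []          # pool of feasible solutions seen so far

def cached_decision(c_hat, solve, grow_prob=0.05, rng=np.random.default_rng(0)):
    # With small probability (the cache's growth rate), call the real
    # solver and add its solution to the cache; otherwise return the
    # cached solution that is best under the predicted costs c_hat.
    if not solution_cache or rng.random() < grow_prob:
        x = solve(c_hat)
        solution_cache.append(x)
        return x
    costs = [c_hat @ x for x in solution_cache]
    return solution_cache[int(np.argmin(costs))]
```

During training, `solve` would be the expensive optimization oracle; lowering `grow_prob` trades solver calls for cache staleness.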
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is a fundamental problem in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the unknown parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
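The low-dimensional surrogate idea in the entry above can be sketched as a learned reparameterization of the decision space: the inner problem is solved over y in R^k with x = P y, and the map P is trained end-to-end on decision quality. The quadratic toy objective and names are assumptions, not the paper's construction.

```python
import torch

# Learn a compact surrogate: decisions x in R^n are restricted to x = P @ y
# with y in R^k (k << n), so the inner optimization is over k variables.
n, k = 50, 3
P = torch.randn(n, k, requires_grad=True)      # learnable surrogate map
outer_opt = torch.optim.Adam([P], lr=1e-2)

def inner_solve(c):
    # Inner problem: min_y c @ (P @ y) + 0.5*||P @ y||^2 (toy quadratic).
    # Stationarity gives (P^T P) y = -P^T c, solved in closed form.
    y = torch.linalg.solve(P.T @ P, -(P.T @ c).unsqueeze(-1)).squeeze(-1)
    return P @ y

for _ in range(100):
    c = torch.randn(n)
    x = inner_solve(c)
    decision_loss = c @ x + 0.5 * x @ x   # true decision quality
    decision_loss.backward()              # gradients flow into P through the solve
    outer_opt.step()
    outer_opt.zero_grad()
```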
This list is automatically generated from the titles and abstracts of the papers on this site.