Integrated Optimization of Predictive and Prescriptive Tasks
- URL: http://arxiv.org/abs/2101.00354v1
- Date: Sat, 2 Jan 2021 02:43:10 GMT
- Title: Integrated Optimization of Predictive and Prescriptive Tasks
- Authors: Mehmet Kolcu, Alper E. Murat
- Abstract summary: We propose a new framework directly integrating predictive tasks under prescriptive tasks.
We train the parameters of the predictive algorithm within the prescription problem via bilevel optimization techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In traditional machine learning techniques, the degree of closeness between
true and predicted values generally measures the quality of predictions.
However, these learning algorithms do not consider prescription problems where
the predicted values will be used as input to decision problems. In this paper,
we efficiently leverage feature variables, and we propose a new framework
directly integrating predictive tasks under prescriptive tasks in order to
prescribe consistent decisions. We train the parameters of predictive algorithm
within a prescription problem via bilevel optimization techniques. We present
the structure of our method and demonstrate its performance using synthetic
data, comparing against classical methods such as point-estimate-based and
stochastic optimization, as well as recently developed machine-learning-based
optimization methods. In addition, we control generalization error using
different penalty approaches and optimize the integration over a validation
data set.
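To make the integration concrete, here is a minimal decision-focused training sketch for a newsvendor-style prescription task. With a point forecast, the lower-level decision is the forecast itself, so the bilevel structure collapses to subgradient descent on the realized prescription cost; the data, cost parameters, and linear predictor are illustrative assumptions, not the paper's formulation.
```python
# A minimal sketch of decision-focused (bilevel) training, assuming a
# newsvendor-style prescription task; data and costs are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: demand y depends linearly on features x plus noise.
n, d = 500, 3
X = rng.normal(size=(n, d))
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta + 5.0 + rng.normal(scale=1.0, size=n)

c_under, c_over = 4.0, 1.0   # asymmetric newsvendor penalties

def prescription_cost(z, y):
    """Cost of ordering z when demand y is realized."""
    return c_under * np.maximum(y - z, 0) + c_over * np.maximum(z - y, 0)

# Lower level: given a point forecast y_hat, the prescribed order is z = y_hat.
# Upper level: choose predictor weights to minimize realized decision cost,
# here by subgradient descent on the piecewise-linear cost.
theta, b, lr = np.zeros(d), 0.0, 0.05
for _ in range(300):
    z = X @ theta + b                      # prescribed decisions
    g_z = np.where(z < y, -c_under, c_over)  # subgradient of cost w.r.t. z
    theta -= lr * (X.T @ g_z) / n
    b -= lr * g_z.mean()

print("decision-trained cost:",
      prescription_cost(X @ theta + b, y).mean())

# Baseline: least-squares prediction, ignoring the downstream cost.
theta_ls, *_ = np.linalg.lstsq(np.c_[X, np.ones(n)], y, rcond=None)
z_ls = np.c_[X, np.ones(n)] @ theta_ls
print("prediction-trained cost:", prescription_cost(z_ls, y).mean())
```
With asymmetric penalties, the decision-trained predictor biases its forecast upward to hedge against costly underage, a shift a pure squared-error fit cannot produce.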
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
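For intuition, a hedged sketch of the "learn the solution directly" idea, assuming a toy box-constrained quadratic task whose per-instance optimum is known in closed form; the sigmoid decision map and the data are illustrative, not the paper's model.
```python
# A sketch of learning decisions directly from features, for the task
#   min_z  c(x)·z + 0.5||z||^2,  z in [0,1]^k   (toy problem, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 400, 4, 3
X = rng.normal(size=(n, d))
C = X @ rng.normal(size=(d, k)) + 0.3 * rng.normal(size=(n, k))  # realized costs

W, lr = np.zeros((d, k)), 0.1
for _ in range(500):
    Z = 1.0 / (1.0 + np.exp(-(X @ W)))    # decisions, feasible by construction
    G = (C + Z) * Z * (1.0 - Z)           # d(objective)/dZ chained through sigmoid
    W -= lr * (X.T @ G) / n

# Compare against the per-instance optimum z* = clip(-c, 0, 1).
Z = 1.0 / (1.0 + np.exp(-(X @ W)))
Z_star = np.clip(-C, 0.0, 1.0)
obj = lambda Z: (C * Z + 0.5 * Z**2).sum(axis=1).mean()
print("learned policy objective:", obj(Z), " optimum:", obj(Z_star))
```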
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
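For reference, this is the standard OWA aggregation whose sorting step makes the objective nondifferentiable; the weights and utilities below are illustrative.
```python
# Ordered Weighted Averaging: weights apply to the *sorted* outcomes, so
# increasing weight toward the smallest values emphasizes the worst-off.
import numpy as np

def owa(y, w):
    """OWA_w(y) = sum_i w_i * y_(i), with y sorted in descending order."""
    return np.sort(y)[::-1] @ w

y = np.array([3.0, 1.0, 2.0])          # per-objective utilities
w_mean = np.array([1/3, 1/3, 1/3])     # plain average
w_fair = np.array([0.1, 0.3, 0.6])     # heavier weight on the worst outcome
print(owa(y, w_mean))                   # 2.0
print(owa(y, w_fair))                   # 0.1*3 + 0.3*2 + 0.6*1 = 1.5
```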
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming [10.06803520598035]
We develop a new approach to the problem, called maximum optimality margin, which designs the machine learning loss function using the optimality condition of the downstream optimization problem.
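A hedged sketch of what an optimality-margin loss can look like for a contextual LP over a small, explicitly enumerated vertex set; this is an illustrative analogue, not the paper's exact construction.
```python
# For a min-cost LP over vertices V, the observed optimal vertex should beat
# every other vertex by a margin under the predicted cost c_hat.
import numpy as np

V = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])  # feasible vertices

def margin_loss(c_hat, opt_idx, margin=0.1):
    """Hinge penalty when some vertex is not worse than the observed optimum."""
    gaps = V @ c_hat - V[opt_idx] @ c_hat    # want gaps >= margin for x' != x*
    gaps = np.delete(gaps, opt_idx)
    return np.maximum(0.0, margin - gaps).sum()

# A cost consistent with the observed optimum (0, 0) gives zero loss.
print(margin_loss(np.array([1.0, 1.0]), opt_idx=0))     # 0.0
print(margin_loss(np.array([-1.0, 1.0]), opt_idx=0))    # positive: (1,0) wins
```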
arXiv Detail & Related papers (2023-01-26T17:53:38Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
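For intuition, here is the minimal split-conformal baseline with a single threshold; the paper's contribution, roughly, is generalizing such a threshold to multiple learnable parameters while retaining approximate coverage.
```python
# Split conformal prediction: calibrate a quantile of nonconformity scores,
# then form prediction sets y_hat +/- q; coverage is ~ 1 - alpha.
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.1                                  # target miscoverage

# Calibration scores: absolute residuals of some fixed predictor (simulated).
scores = np.abs(rng.normal(size=1000))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

test = np.abs(rng.normal(size=5000))
print("empirical coverage:", (test <= q).mean())   # close to 0.90
```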
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
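A hedged sketch of one such first-order scheme: gradient steps on a robust (Huber) loss followed by hard-thresholding projection onto the sparsity constraint; illustrative of the non-convex formulation, not the paper's exact algorithm or guarantees.
```python
# Sparse linear regression with gross outliers, solved by robust gradient
# steps plus projection onto the s-sparse set (keep the s largest entries).
import numpy as np

rng = np.random.default_rng(3)
n, d, s = 200, 50, 5
X = rng.normal(size=(n, d))
beta_true = np.zeros(d); beta_true[:s] = [3, -2, 1.5, 2.5, -1]
y = X @ beta_true + 0.1 * rng.normal(size=n)
y[:20] += rng.normal(scale=20.0, size=20)        # gross outliers

def huber_grad(r, delta=1.0):
    return np.clip(r, -delta, delta)              # derivative of the Huber loss

beta, lr = np.zeros(d), 0.5 / n
for _ in range(500):
    beta -= lr * (X.T @ huber_grad(X @ beta - y))  # robust gradient step
    keep = np.argsort(np.abs(beta))[-s:]           # project: keep s largest
    mask = np.zeros(d, bool); mask[keep] = True
    beta[~mask] = 0.0

print("support recovered:", sorted(keep) == list(range(s)))
print("estimation error:", np.linalg.norm(beta - beta_true))
```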
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order gradient descent methods.
Our approach modifies the objective function by adding a regularization term, and casts the joint updating process of the parameters and the learning rate as a single problem.
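Since the paper's exact update is not reproduced here, the following is an illustrative hypergradient-style analogue: the learning rate is updated alongside the parameters, with a regularization term keeping it near a reference value; this is a stand-in for the idea of regularizing the step-size choice, not the Meta-Regularization update itself.
```python
# Adapt eta during descent: the hypergradient of f(x_k) w.r.t. eta is
# -g_k . g_{k-1}; a penalty term pulls eta toward its initial value.
import numpy as np

def f(x):    return 0.5 * x @ x        # toy objective
def grad(x): return x

x = np.array([5.0, -3.0])
eta, lam, meta_lr = 0.01, 1e-3, 0.01
g_prev = grad(x)
for _ in range(100):
    g = grad(x)
    eta -= meta_lr * (-(g @ g_prev) + lam * (eta - 0.01))
    x -= eta * g
    g_prev = g

print("f(x) after descent:", f(x), " adapted eta:", eta)
```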
arXiv Detail & Related papers (2021-04-12T13:13:34Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
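A hedged sketch of the predictor interface only: a toy network maps a boundary-condition encoding and a target fill fraction to a density grid, with the fill fraction enforced by shifting the logits. The architecture, weights, and the physics-based training are omitted and purely illustrative.
```python
# Predictor interface: (boundary conditions, fill fraction) -> density grid.
import numpy as np

rng = np.random.default_rng(4)
H = W = 16                                   # design grid
W1 = rng.normal(scale=0.3, size=(5, 64))     # toy weights; training not shown
W2 = rng.normal(scale=0.3, size=(64, H * W))

def predict_design(bc, fill):
    """bc: 4-dim boundary-condition encoding; fill: target volume fraction."""
    h = np.tanh(np.concatenate([bc, [fill]]) @ W1)
    logits = h @ W2
    # Bisection on a logit shift so the mean density equals the target fill.
    lo, hi = -20.0, 20.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if (1 / (1 + np.exp(-(logits + mid)))).mean() < fill:
            lo = mid
        else:
            hi = mid
    return (1 / (1 + np.exp(-(logits + 0.5 * (lo + hi))))).reshape(H, W)

design = predict_design(bc=np.array([1.0, 0.0, 0.0, 1.0]), fill=0.4)
print(design.shape, round(design.mean(), 3))    # (16, 16) ~0.4
```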
arXiv Detail & Related papers (2020-12-11T14:33:27Z)