Mixed-Integer Optimization with Constraint Learning
- URL: http://arxiv.org/abs/2111.04469v3
- Date: Thu, 26 Oct 2023 19:50:48 GMT
- Title: Mixed-Integer Optimization with Constraint Learning
- Authors: Donato Maragno, Holly Wiberg, Dimitris Bertsimas, S. Ilker Birbil,
Dick den Hertog, Adejuyigbe Fajemisin
- Abstract summary: We establish a broad methodological foundation for mixed-integer optimization with learned constraints.
We exploit the mixed-integer optimization-representability of many machine learning methods.
We demonstrate the method in both World Food Programme planning and chemotherapy optimization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish a broad methodological foundation for mixed-integer optimization
with learned constraints. We propose an end-to-end pipeline for data-driven
decision making in which constraints and objectives are directly learned from
data using machine learning, and the trained models are embedded in an
optimization formulation. We exploit the mixed-integer
optimization-representability of many machine learning methods, including
linear models, decision trees, ensembles, and multi-layer perceptrons, which
allows us to capture various underlying relationships between decisions,
contextual variables, and outcomes. We also introduce two approaches for
handling the inherent uncertainty of learning from data. First, we characterize
a decision trust region using the convex hull of the observations, to ensure
credible recommendations and avoid extrapolation. We efficiently incorporate
this representation using column generation and propose a more flexible
formulation to deal with low-density regions and high-dimensional datasets.
Then, we propose an ensemble learning approach that enforces constraint
satisfaction over multiple bootstrapped estimators or multiple algorithms. In
combination with domain-driven components, the embedded models and trust region
define a mixed-integer optimization problem for prescription generation. We
implement this framework as a Python package (OptiCL) for practitioners. We
demonstrate the method in both World Food Programme planning and chemotherapy
optimization. The case studies illustrate the framework's ability to generate
high-quality prescriptions as well as the value added by the trust region, the
use of ensembles to control model robustness, the consideration of multiple
machine learning methods, and the inclusion of multiple learned constraints.
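The trust-region idea above can be sketched independently of any modeling framework: a candidate decision is credible only if it lies in the convex hull of the observed data, which reduces to a small feasibility LP over the convex-combination weights. The sketch below is an assumption-level illustration using SciPy's `linprog`, not OptiCL's actual API:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X):
    """Check whether point x lies in the convex hull of the rows of X
    by solving the feasibility LP: find lam >= 0 with sum(lam) = 1
    and X.T @ lam = x."""
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # status 0 = feasible (optimal found)

# Three observed decisions spanning a triangle.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_convex_hull(np.array([0.25, 0.25]), X))  # True: inside
print(in_convex_hull(np.array([1.0, 1.0]), X))    # False: extrapolation
```

In the paper's formulation this membership condition is embedded as constraints of the prescription problem itself (and handled at scale via column generation), rather than checked post hoc as here.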
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
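The baseline-correction idea can be illustrated with a toy inverse-propensity-scoring (IPS) setup: subtracting a constant baseline from the rewards (and adding it back) keeps the estimator unbiased while reducing its variance. The synthetic logging policy and reward model below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Logged bandit data: uniform logging policy over k actions, binary rewards.
n, k = 10_000, 5
actions = rng.integers(0, k, size=n)
base_reward = np.linspace(0.1, 0.5, k)        # true mean reward per action
rewards = rng.binomial(1, base_reward[actions]).astype(float)

# Deterministic target policy: always play the last action (true value 0.5).
target_probs = np.zeros(k)
target_probs[k - 1] = 1.0
w = target_probs[actions] / (1.0 / k)         # importance weights

# Plain IPS estimate of the target policy's value.
ips = np.mean(w * rewards)

# Baseline-corrected (control-variate) estimate: unbiased for any constant b
# because E[w] = 1, but with lower variance when b is near the mean reward.
b = rewards.mean()
ips_cv = np.mean(w * (rewards - b)) + b
```

Both estimates land near the true value 0.5, while the per-sample variance of the corrected terms is strictly smaller; the cited paper goes further and characterizes the variance-optimal choice of baseline in closed form.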
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Robust Data-driven Prescriptiveness Optimization [4.792851066169871]
This paper introduces a distributionally robust contextual optimization model in which the coefficient of prescriptiveness replaces the classical empirical risk minimization objective.
We evaluate the robustness of the resulting policies against alternative methods when the out-of-sample dataset is subject to varying amounts of distribution shift.
arXiv Detail & Related papers (2023-06-09T14:56:06Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
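For a quadratic objective, the backward pass of unrolled gradient descent can be written out by hand: each iteration is an affine map of the problem data, so the Jacobian of the final iterate follows from chaining the per-step linear updates. The minimal sketch below (my own toy construction, not the paper's folded-optimization system) verifies the chained Jacobian against finite differences:

```python
import numpy as np

def unrolled_gd(A, b, steps=50):
    """Unroll `steps` gradient-descent iterations on 0.5*||A x - b||^2,
    returning x_T and the Jacobian d x_T / d b accumulated alongside the
    forward pass (forward-mode differentiation of the unrolled solver)."""
    m, n = A.shape
    lr = 1.0 / np.linalg.eigvalsh(A.T @ A).max()  # stable step size
    x = np.zeros(n)
    J = np.zeros((n, m))                          # d x_t / d b, starts at 0
    for _ in range(steps):
        x = x - lr * (A.T @ (A @ x - b))
        # Chain rule per step: dx_{t+1}/db = (I - lr A^T A) dx_t/db + lr A^T
        J = (np.eye(n) - lr * A.T @ A) @ J + lr * A.T
    return x, J

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
b = rng.normal(size=4)
x, J = unrolled_gd(A, b)

# Finite-difference check: x_T is affine in b, so agreement is tight.
eps = 1e-6
J_fd = np.zeros_like(J)
for i in range(len(b)):
    bp = b.copy()
    bp[i] += eps
    J_fd[:, i] = (unrolled_gd(A, bp)[0] - x) / eps
print(np.max(np.abs(J - J_fd)))  # near machine precision
```

The paper's contribution is precisely to avoid this per-iteration bookkeeping (and the cost of automatic differentiation through every solver step) by folding the recursion into an analytical model of the backward pass.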
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Multi-Task Learning for Sparsity Pattern Heterogeneity: Statistical and Computational Perspectives [10.514866749547558]
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets.
A key novelty of our framework is that it allows the sparsity pattern of regression coefficients and the values of non-zero coefficients to differ across tasks.
Our methods encourage models to share information across tasks by separately encouraging (1) the coefficient supports and/or (2) the nonzero coefficient values to be similar.
This allows models to borrow strength during variable selection even when non-zero coefficient values differ across tasks.
arXiv Detail & Related papers (2022-12-16T19:52:25Z) - Context-Aware Ensemble Learning for Time Series [11.716677452529114]
We introduce a new approach in which a meta learner combines the base model predictions using a superset of features, namely the union of the base models' feature vectors, rather than the predictions themselves.
Our model does not feed the base models' predictions into a machine learning algorithm; instead, it chooses the best possible combination at each time step based on the state of the problem.
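The core idea, selecting a base model per time step from context features rather than averaging predictions, can be shown with a deliberately simple toy example (the regime-switching setup and threshold rule below are illustrative assumptions, not the paper's meta learner):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two base forecasters, each accurate only in one regime of the context z.
T = 200
z = rng.normal(size=T)                               # context feature
y = np.where(z > 0, 1.0, -1.0) + 0.1 * rng.normal(size=T)
pred_a = np.ones(T)                                  # good when z > 0
pred_b = -np.ones(T)                                 # good when z <= 0

def meta_select(z, pred_a, pred_b):
    """Toy context-aware combiner: a per-step rule on the feature z
    decides which base model's prediction to emit."""
    return np.where(z > 0, pred_a, pred_b)

combined = meta_select(z, pred_a, pred_b)
mse_combined = np.mean((combined - y) ** 2)
mse_a = np.mean((pred_a - y) ** 2)
mse_b = np.mean((pred_b - y) ** 2)
```

The combined forecaster beats either base model alone because the selection rule sees the context, which neither constant predictor exploits; the cited paper learns this selection rule from the union of the base models' feature vectors instead of hard-coding it.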
arXiv Detail & Related papers (2022-11-30T10:36:13Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
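The flavor of column-wise iterative imputation can be sketched in a few lines: initialize missing entries with column means, then repeatedly refit a per-column model on the remaining columns and refresh that column's missing entries. This is a bare-bones linear-model sketch of the general idea, not HyperImpute's adaptive model selection:

```python
import numpy as np

def iterative_impute(X, iters=10):
    """Column-wise iterative imputation: mean-initialize missing cells,
    then cycle over columns, fitting a least-squares model of each column
    on the others and re-imputing its missing entries."""
    X = X.astype(float).copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])  # crude warm start
    for _ in range(iters):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            # Fit column j on the other columns (plus an intercept).
            A = np.column_stack([others[~miss], np.ones((~miss).sum())])
            coef, *_ = np.linalg.lstsq(A, X[~miss, j], rcond=None)
            B = np.column_stack([others[miss], np.ones(miss.sum())])
            X[miss, j] = B @ coef
    return X

# Toy data where column 1 = 2 * column 0, with one missing cell.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
print(iterative_impute(X)[2, 1])  # ≈ 6.0, recovering the linear relation
```

The cited framework generalizes this loop by choosing the per-column learner automatically instead of fixing a linear model in advance.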
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Model-Based Deep Learning: On the Intersection of Deep Learning and
Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z) - Learning Distributionally Robust Models at Scale via Composite
Optimization [45.47760229170775]
We show how different variants of DRO are simply instances of a finite-sum composite optimization for which we provide scalable methods.
We also provide empirical results that demonstrate the effectiveness of our proposed algorithm with respect to the prior art in order to learn robust models from very large datasets.
arXiv Detail & Related papers (2022-03-17T20:47:42Z) - Deep Learning with Multiple Data Set: A Weighted Goal Programming
Approach [2.7393821783237184]
Large-scale data analysis is growing at an exponential rate as data proliferates in our societies.
Deep Learning models require plenty of resources, and distributed training is needed.
This paper presents a Multicriteria approach for distributed learning.
arXiv Detail & Related papers (2021-11-27T07:10:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.