Efficient Learning of Decision-Making Models: A Penalty Block Coordinate
Descent Algorithm for Data-Driven Inverse Optimization
- URL: http://arxiv.org/abs/2210.15393v1
- Date: Thu, 27 Oct 2022 12:52:56 GMT
- Title: Efficient Learning of Decision-Making Models: A Penalty Block Coordinate
Descent Algorithm for Data-Driven Inverse Optimization
- Authors: Rishabh Gupta, Qi Zhang
- Abstract summary: We consider the inverse problem where we use prior decision data to uncover the underlying decision-making process.
This statistical learning problem is referred to as data-driven inverse optimization.
We propose an efficient block coordinate descent-based algorithm to solve large problem instances.
- Score: 12.610576072466895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision-making problems are commonly formulated as optimization problems,
which are then solved to make optimal decisions. In this work, we consider the
inverse problem where we use prior decision data to uncover the underlying
decision-making process in the form of a mathematical optimization model. This
statistical learning problem is referred to as data-driven inverse
optimization. We focus on problems where the underlying decision-making process
is modeled as a convex optimization problem whose parameters are unknown. We
formulate the inverse optimization problem as a bilevel program and propose an
efficient block coordinate descent-based algorithm to solve large problem
instances. Numerical experiments on synthetic datasets demonstrate the
computational advantage of our method compared to standard commercial solvers.
Moreover, the real-world utility of the proposed approach is highlighted
through two realistic case studies in which we consider estimating risk
preferences and learning local constraint parameters of agents in a multiplayer
Nash bargaining game.
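For intuition only, the Python sketch below illustrates the penalty-plus-block-coordinate-descent idea on a toy instance; it is not the authors' algorithm, whose forward model, penalty reformulation, and update rules are specified in the paper. The sketch assumes an unconstrained quadratic forward model (the agent solves min_x 0.5*x'Qx - u'x, so optimal decisions satisfy Q x = u), and the names (Q, U, X, Y), the penalty weight rho, and the iteration count are all illustrative choices.

```python
import numpy as np

# Minimal, hypothetical sketch of a penalty-based block coordinate descent for
# data-driven inverse optimization (NOT the algorithm from the paper).
# Assumed toy forward model: the agent solves min_x 0.5*x'Qx - u'x, so an
# optimal decision satisfies the stationarity condition Q x = u. The unknown
# parameter block is the symmetric positive-definite matrix Q.

rng = np.random.default_rng(seed=0)
n, m = 4, 60                                  # decision dimension, number of observed decisions

Q_true = rng.standard_normal((n, n))
Q_true = Q_true @ Q_true.T + n * np.eye(n)    # ground-truth parameters (positive definite)
U = rng.standard_normal((m, n))               # contextual inputs u_i, one per row
X = np.linalg.solve(Q_true, U.T).T            # observed optimal decisions x_i = Q_true^{-1} u_i

rho = 0.1         # penalty weight on the lower-level stationarity residual Q y_i - u_i
Q = np.eye(n)     # initial parameter estimate
Y = X.copy()      # initial estimates of the re-predicted decisions y_i

for _ in range(200):
    # Decision block: for fixed Q, each y_i minimizes ||x_i - y_i||^2 + rho*||Q y_i - u_i||^2,
    # i.e. it solves the linear system (I + rho*Q'Q) y_i = x_i + rho*Q'u_i.
    A = np.eye(n) + rho * Q.T @ Q
    rhs = X + rho * U @ Q                     # rows are (x_i + rho*Q'u_i)'
    Y = np.linalg.solve(A, rhs.T).T

    # Parameter block: for fixed y_i, fit Q by least squares on the penalty term
    # (min_Q sum_i ||Q y_i - u_i||^2), then symmetrize as a crude projection step.
    Q = np.linalg.lstsq(Y, U, rcond=None)[0].T
    Q = 0.5 * (Q + Q.T)

print("relative parameter error:", np.linalg.norm(Q - Q_true) / np.linalg.norm(Q_true))
print("relative decision error :", np.linalg.norm(Y - X) / np.linalg.norm(X))
```

With noiseless, consistent observations the alternating updates should recover the ground-truth parameters to high accuracy; the penalty weight trades off how strictly lower-level optimality is enforced against how closely the re-predicted decisions track the observations.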
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO), where the goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Data-driven decision-focused surrogate modeling [10.1947610432159]
We introduce the concept of decision-focused surrogate modeling for solving challenging nonlinear optimization problems in real-time settings.
The proposed data-driven framework seeks to learn a simpler, e.g. convex, surrogate optimization model that is trained to minimize the decision prediction error.
We validate our framework through numerical experiments involving the optimization of common nonlinear chemical processes.
arXiv Detail & Related papers (2023-08-23T14:23:26Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential
Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Decomposition and Adaptive Sampling for Data-Driven Inverse Linear
Optimization [12.610576072466895]
This work addresses inverse linear optimization where the goal is to infer the unknown cost vector of a linear program.
We introduce a new formulation of the problem that, compared to other existing methods, allows the recovery of a less restrictive and generally more appropriate admissible set of cost estimates. (A generic bilevel statement of inverse linear optimization is sketched for reference after this list.)
arXiv Detail & Related papers (2020-09-16T22:25:31Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Causal Bayesian Optimization [8.958125394444679]
We study the problem of globally optimizing a variable of interest that is part of a causal model in which a sequence of interventions can be performed.
Our approach combines ideas from causal inference, uncertainty quantification and sequential decision making.
We show how knowing the causal graph significantly improves the ability to reason about optimal decision making strategies.
arXiv Detail & Related papers (2020-05-24T13:20:50Z)
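For reference alongside the "Decomposition and Adaptive Sampling for Data-Driven Inverse Linear Optimization" entry above, and the bilevel formulation mentioned in the abstract, one common generic statement of data-driven inverse linear optimization is the bilevel program below. The notation (A_k, b_k, x_k, y_k, the squared-Euclidean loss, and the normalization on c) is assumed for illustration; the cited papers' exact formulations differ.

```latex
% Generic data-driven inverse linear optimization (illustrative notation only):
% given decisions x_k observed under known constraint data (A_k, b_k), find a
% cost vector c whose induced optimal decisions y_k reproduce the observations.
\begin{align*}
  \min_{c,\,y_1,\dots,y_K}\quad & \sum_{k=1}^{K} \lVert x_k - y_k \rVert_2^2 \\
  \text{s.t.}\quad & y_k \in \operatorname*{arg\,min}_{x}\ \bigl\{\, c^\top x \ :\ A_k x \ge b_k \,\bigr\},
      \qquad k = 1,\dots,K, \\
  & \lVert c \rVert_1 = 1,
\end{align*}
% where the normalization on c rules out the trivial estimate c = 0.
```

The main paper studies the more general setting of a convex (rather than linear) forward problem and attacks the resulting bilevel program with a penalty block coordinate descent scheme.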
This list is automatically generated from the titles and abstracts of the papers on this site.