Inverse Multiobjective Optimization Through Online Learning
- URL: http://arxiv.org/abs/2010.06140v1
- Date: Mon, 12 Oct 2020 17:53:49 GMT
- Title: Inverse Multiobjective Optimization Through Online Learning
- Authors: Chaosheng Dong, Bo Zeng
- Abstract summary: We study the problem of learning the objective functions or constraints of a multiobjective decision making model.
We develop two online learning algorithms with implicit update rules which can handle noisy data.
Numerical results show that both algorithms can learn the parameters with great accuracy and are robust to noise.
- Score: 14.366265951396587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning the objective functions or constraints of a
multiobjective decision making model, based on a set of sequentially arrived
decisions. In particular, these decisions might not be exact: they may carry
measurement noise or be generated under the bounded rationality of decision
makers. In this paper, we propose a general online learning framework to deal
with this learning problem using inverse multiobjective optimization. More
precisely, we develop two online learning algorithms with implicit update rules
which can handle noisy data. Numerical results show that both algorithms can
learn the parameters with great accuracy and are robust to noise.
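The learning setting above can be illustrated on a toy instance. The sketch below is not the paper's implicit-update algorithm; it is a simplified explicit projected-subgradient variant, assuming a finite decision set, a linear weighted-sum scalarization, and synthetic noisy observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (all data synthetic): 20 candidate decisions, each with 2
# objective values.  The decision maker picks the candidate minimizing a
# weighted sum w_true . f(x), up to noise (bounded rationality).
F = rng.random((20, 2))          # row i = objective vector of decision i
w_true = np.array([0.7, 0.3])    # unknown weights to be recovered

def observed_decision(w, noise=0.05):
    scores = F @ w + noise * rng.standard_normal(len(F))
    return int(np.argmin(scores))

def project_simplex(w):
    # Euclidean projection onto the probability simplex.
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > css - 1.0)[0][-1]
    return np.maximum(w - (css[rho] - 1.0) / (rho + 1), 0.0)

w = np.full(2, 0.5)              # initial estimate
for t in range(500):
    y = observed_decision(w_true)     # sequentially arriving decision
    x = int(np.argmin(F @ w))         # best decision under current estimate
    g = F[y] - F[x]                   # subgradient of the suboptimality loss
    w = project_simplex(w - 0.5 / np.sqrt(t + 1) * g)

print(w)  # a weight estimate consistent with the observed choices
```

The suboptimality loss `w . f(y_t) - min_x w . f(x)` is convex in `w`, which is what makes the online subgradient step well defined; the paper's implicit updates solve a small optimization problem per round instead.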
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
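For context, an OWA objective applies its weights to the sorted outcome values rather than to fixed components, which makes it piecewise linear and nondifferentiable at ties. A minimal sketch (the values and weights are illustrative):

```python
import numpy as np

def owa(values, weights):
    # Ordered Weighted Averaging: weights attach to ranks (sorted positions),
    # not to fixed components, so the function is piecewise linear and
    # nondifferentiable wherever two components tie.
    return float(np.sort(values)[::-1] @ weights)

v = np.array([3.0, 1.0, 2.0])
w = np.array([0.5, 0.3, 0.2])   # rank weights (illustrative)
print(owa(v, w))                # 0.5*3 + 0.3*2 + 0.2*1 = 2.3
```

Special cases show the expressiveness: rank weights `[1, 0, 0]` recover the max, `[0, 0, 1]` the min, and putting larger weights on the worst-ranked outcomes yields the fairness-oriented aggregations the paper targets.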
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
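The idea of learning solutions directly from observable features can be sketched in a toy setting; the linear proxy model, the cost map c(z) = A z, and the three-option decision set below are all assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: features z induce a cost vector c(z) = A z, and the
# "solver" returns the one-hot minimizer over 3 discrete options.
A = rng.random((3, 4))
Z = rng.random((500, 4))
labels = np.argmin(Z @ A.T, axis=1)          # solver outputs used as targets
Y = np.eye(3)[labels]

# The proxy: a linear model fitted to predict solutions directly from
# features, so no backpropagation through the solver is needed.
Zb = np.hstack([Z, np.ones((len(Z), 1))])    # add an intercept column
W, *_ = np.linalg.lstsq(Zb, Y, rcond=None)
acc = float((np.argmax(Zb @ W, axis=1) == labels).mean())
print(acc)  # fraction of instances where the proxy matches the solver
```

Once trained, the proxy answers new instances with a single forward pass, which is the efficiency argument the paper makes against differentiating through the optimization step.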
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Efficient Learning of Decision-Making Models: A Penalty Block Coordinate
Descent Algorithm for Data-Driven Inverse Optimization [12.610576072466895]
We consider the inverse problem where we use prior decision data to uncover the underlying decision-making process.
This statistical learning problem is referred to as data-driven inverse optimization.
We propose an efficient block coordinate descent-based algorithm to solve large problem instances.
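Block coordinate descent itself is simple to illustrate. The sketch below is a generic BCD illustration, not the paper's penalty-based algorithm: it alternates exact minimizations over the two scalar blocks of a convex quadratic.

```python
# Generic block coordinate descent sketch (not the paper's penalty algorithm):
# minimize the convex quadratic f(x, y) = (x - 2)**2 + (y + 1)**2 + x*y by
# exactly minimizing over one block while the other is held fixed.
x, y = 0.0, 0.0
for _ in range(50):
    x = (4.0 - y) / 2.0   # argmin over x: solve 2(x - 2) + y = 0
    y = -(2.0 + x) / 2.0  # argmin over y: solve 2(y + 1) + x = 0
print(round(x, 4), round(y, 4))  # converges to (10/3, -8/3): 3.3333 -2.6667
```

Each block update contracts the error by a constant factor here, which is why BCD scales to the large inverse-optimization instances the paper considers: every subproblem stays small even when the full problem is not.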
arXiv Detail & Related papers (2022-10-27T12:52:56Z) - Model-Based Deep Learning: On the Intersection of Deep Learning and
Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z) - Targeted Active Learning for Bayesian Decision-Making [15.491942513739676]
We argue that when acquiring samples sequentially, separating learning and decision-making is sub-optimal.
We introduce a novel active learning strategy which takes the down-the-line decision problem into account.
Specifically, we introduce a novel active learning criterion which maximizes the expected information gain on the posterior distribution of the optimal decision.
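A decision-aware information-gain criterion of this kind can be sketched on a two-state toy problem, where each state implies a distinct optimal decision, so information gain about the state equals information gain about the optimal decision; the prior and likelihoods below are illustrative, not the paper's model:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Assumed toy setup: two world states with a uniform prior; each state
# implies a distinct optimal decision, so reducing uncertainty about the
# state reduces uncertainty about the optimal decision by the same amount.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.1],   # P(obs = 1 | state) for query 0
                       [0.6, 0.4]])  # P(obs = 1 | state) for query 1

def expected_info_gain(q):
    # Expected entropy reduction of the posterior after asking query q.
    h0, gain = entropy(prior), 0.0
    for obs in (0, 1):
        p_obs_given_state = likelihood[q] if obs == 1 else 1.0 - likelihood[q]
        joint = prior * p_obs_given_state
        p_obs = joint.sum()
        gain += p_obs * (h0 - entropy(joint / p_obs))
    return gain

scores = [expected_info_gain(q) for q in (0, 1)]
best = int(np.argmax(scores))
print(best)  # 0: the more discriminative query wins
```

The acquisition rule simply picks the query with the largest expected posterior entropy reduction, which is the sense in which the sampling strategy is targeted at the downstream decision rather than at the model parameters in general.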
arXiv Detail & Related papers (2021-06-08T09:05:43Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential
Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
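The perturbation idea can be sketched for the argmax operator: averaging one-hot argmax outputs under random score perturbations yields a smooth surrogate of a piecewise-constant map. The Gaussian noise, its scale, and the Monte Carlo sample size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_argmax(theta, eps=0.5, n_samples=2000):
    # Monte Carlo smoothing of the piecewise-constant argmax: average one-hot
    # argmax indicators under Gaussian perturbations of the scores.  The
    # result is a probability vector that varies smoothly with theta.
    z = rng.standard_normal((n_samples, len(theta)))
    idx = np.argmax(theta + eps * z, axis=1)
    return np.bincount(idx, minlength=len(theta)) / n_samples

p = perturbed_argmax(np.array([1.0, 0.8, -0.5]))
print(p)  # a probability vector concentrating on the high-score indices
```

Because the smoothed output is an expectation, its derivative with respect to the scores exists and can itself be estimated by sampling, which is what allows gradients to flow through an otherwise non-differentiable solver call.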
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.