A Framework for Inherently Interpretable Optimization Models
- URL: http://arxiv.org/abs/2208.12570v1
- Date: Fri, 26 Aug 2022 10:32:00 GMT
- Title: A Framework for Inherently Interpretable Optimization Models
- Authors: Marc Goerigk and Michael Hartisch
- Abstract summary: The solution of large-scale problems that seemed intractable decades ago is now a routine task.
One major barrier is that the optimization software can be perceived as a black box.
We propose an optimization framework to derive solutions that inherently come with an easily comprehensible explanatory rule.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With dramatic improvements in optimization software, the solution of
large-scale problems that seemed intractable decades ago is now a routine
task. This puts even more real-world applications into the reach of optimizers.
At the same time, solving optimization problems often turns out to be one of
the smaller difficulties when putting solutions into practice. One major
barrier is that the optimization software can be perceived as a black box,
which may produce solutions of high quality, but can create completely
different solutions when circumstances change, leading to low acceptance of
optimized solutions. Such issues of interpretability and explainability have
seen significant attention in other areas, such as machine learning, but less
so in optimization. In this paper we propose an optimization framework to
derive solutions that inherently come with an easily comprehensible explanatory
rule specifying under which circumstances which solution should be chosen. Focussing on
decision trees to represent explanatory rules, we propose integer programming
formulations as well as a heuristic method that ensure applicability of our
approach even for large-scale problems. Computational experiments using random
and real-world data indicate that the costs of inherent interpretability can be
very small.
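To illustrate the kind of explanatory rule the paper targets, the following minimal sketch (a heuristic on synthetic data, not the paper's integer programming formulations) fits a shallow decision tree over observable scenario features that prescribes which of a few candidate solutions to use; the gap to the scenario-wise optimum quantifies the cost of interpretability. The toy knapsack below is invented for illustration.

```python
# Minimal heuristic sketch, NOT the paper's integer programming approach:
# a shallow decision tree over scenario features prescribes which candidate
# solution to use; the gap to the scenario-wise optimum is the price of
# interpretability. All problem data below is synthetic and illustrative.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy 0/1 knapsack: fixed weights/capacity, profits depend on 2 scenario features.
n_items, capacity = 6, 10
weights = rng.integers(1, 6, size=n_items)
base = rng.uniform(1.0, 10.0, size=n_items)
W = rng.uniform(-3.0, 3.0, size=(2, n_items))     # feature-to-profit coefficients

def profits(x):                                   # x: scenario feature vector, shape (2,)
    return base + x @ W

# Enumerate all feasible item subsets once (2^6 candidates, small on purpose).
subsets = [np.array(s) for s in itertools.product([0, 1], repeat=n_items)
           if np.dot(s, weights) <= capacity]

def best_subset(x):
    values = [profits(x) @ s for s in subsets]
    return int(np.argmax(values)), max(values)

# Sample scenarios and compute each scenario's individually optimal subset.
X = rng.uniform(-1.0, 1.0, size=(300, 2))
labels, opt_values = zip(*(best_subset(x) for x in X))

# Interpretability: a depth-2 tree maps scenario features to one prescribed subset.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
prescribed = tree.predict(X)
rule_values = [profits(x) @ subsets[j] for x, j in zip(X, prescribed)]

print(export_text(tree, feature_names=["feature_1", "feature_2"]))
print("average scenario-optimal value :", round(float(np.mean(opt_values)), 3))
print("average value under the tree rule:", round(float(np.mean(rule_values)), 3))
```

Printing the tree yields an if-then rule a planner can read directly; the two averages show how much objective value the interpretable rule gives up.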
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict the unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict the unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
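For context, the two-stage baseline that both entries start from can be sketched as follows; this is a minimal illustration with an invented toy LP and synthetic data, not the joint or proxy models proposed in these papers.

```python
# Minimal two-stage predict-then-optimize baseline, NOT the joint/proxy models
# proposed in these papers. The regressor, the toy LP, and all data are invented.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_features, n_vars = 5, 8

# Synthetic ground truth: cost vectors are a noisy linear function of features.
theta = rng.normal(size=(n_features, n_vars))
X = rng.normal(size=(200, n_features))
C = X @ theta + 0.1 * rng.normal(size=(200, n_vars))

# Stage 1: predict the unknown cost vector from the observable features.
model = Ridge(alpha=1.0).fit(X[:150], C[:150])

# Stage 2: solve  min c^T x  s.t.  sum(x) = 1, 0 <= x <= 1  with predicted costs,
# then evaluate the resulting decision under the true costs (regret).
A_eq, b_eq, bounds = np.ones((1, n_vars)), [1.0], [(0, 1)] * n_vars
regret = []
for features, c_true in zip(X[150:], C[150:]):
    c_pred = model.predict(features.reshape(1, -1)).ravel()
    decision = linprog(c_pred, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    best = linprog(c_true, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    regret.append(c_true @ decision.x - best.fun)

print("average regret of the two-stage pipeline:", round(float(np.mean(regret)), 4))
```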
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
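As a concrete (and deliberately tiny) illustration of unrolling, the sketch below propagates sensitivities through the iterations of a toy gradient-descent solver by hand and checks the derivative against finite differences; it is not the paper's folded-optimization system, and all data is synthetic.

```python
# Minimal hand-rolled sketch of differentiating through an unrolled solver,
# NOT the folded-optimization system of the paper: propagate d x_k / d theta
# through K gradient steps on a toy quadratic and verify the resulting
# derivative with finite differences. All data is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, K = 4, 100
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)                     # positive definite Hessian
alpha = 1.0 / np.linalg.eigvalsh(Q).max()       # stable step size
b0, d, target = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)

def unrolled(theta):
    """Run K gradient steps on min_x 0.5 x^T Q x - (b0 + theta d)^T x,
    carrying dx/dtheta alongside x (forward-mode through the unroll)."""
    b = b0 + theta * d
    x, dx = np.zeros(n), np.zeros(n)
    for _ in range(K):
        grad, dgrad = Q @ x - b, Q @ dx - d
        x, dx = x - alpha * grad, dx - alpha * dgrad
    return x, dx

# Downstream loss L(theta) = 0.5 ||x_K(theta) - target||^2, via the chain rule.
theta, eps = 0.7, 1e-6
x, dx = unrolled(theta)
unrolled_grad = (x - target) @ dx

xp, _ = unrolled(theta + eps)
xm, _ = unrolled(theta - eps)
fd_grad = (0.5 * np.sum((xp - target) ** 2) - 0.5 * np.sum((xm - target) ** 2)) / (2 * eps)
print("unrolled derivative:", unrolled_grad, " finite difference:", fd_grad)
```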
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Learning Hard Optimization Problems: A Data Generation Perspective [44.4747903763245]
This paper connects the volatility of the training data to the ability of a model to approximate it.
It proposes a method for producing (exact or approximate) solutions to optimization problems that are amenable to supervised tasks.
arXiv Detail & Related papers (2021-06-04T17:03:44Z)
- Learning to Optimize Under Constraints with Unsupervised Deep Neural Networks [0.0]
We propose a machine learning (ML) method that learns how to solve a generic constrained continuous optimization problem.
Specifically, we develop an unsupervised deep learning (DL) approach for solving constrained optimization problems in real time.
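A minimal numpy stand-in for this unsupervised idea is sketched below; it is not the paper's deep network or problem setting, and the toy projection problem is invented purely for illustration.

```python
# Minimal numpy stand-in for unsupervised training, NOT the paper's deep network:
# fit a linear "solver" x = W p by minimizing a penalized objective over sampled
# problem instances p, with no optimal solutions used as labels. The toy problem is
#   min_x 0.5 ||x||^2 - p^T x   subject to   sum(x) <= 1,
# whose exact solution (a halfspace projection) is used only for comparison.
# A single linear map can only approximate the piecewise-linear optimal policy.
import numpy as np

rng = np.random.default_rng(4)
n, rho, lr, batch = 5, 5.0, 0.02, 32
a, b = np.ones(n), 1.0                                # constraint a^T x <= b

def penalized_grad(W, p):
    x = W @ p
    violation = max(0.0, a @ x - b)
    dL_dx = (x - p) + 2.0 * rho * violation * a       # grad of objective + rho*violation^2
    return np.outer(dL_dx, p)

W = np.zeros((n, n))
for _ in range(1500):                                 # unsupervised mini-batch training
    G = sum(penalized_grad(W, rng.normal(size=n)) for _ in range(batch))
    W -= lr * G / batch

p = rng.normal(size=n)
x_learned = W @ p
x_exact = p if a @ p <= b else p - ((a @ p - b) / (a @ a)) * a
print("learned solution :", np.round(x_learned, 3))
print("projected optimum:", np.round(x_exact, 3))
```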
arXiv Detail & Related papers (2021-01-04T02:58:37Z)
- Learning the Solution Manifold in Optimization and Its Application in Motion Planning [4.177892889752434]
We learn a model of the solution manifold such that the model represents an infinite set of solutions.
In our framework, we reduce this problem to density estimation by using importance sampling.
We apply to motion-planning problems, which involve the optimization of high-dimensional parameters.
arXiv Detail & Related papers (2020-07-24T08:05:36Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
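As a toy illustration of optimizing a learned surrogate in place of an expensive objective (not the paper's quality-aware surrogate within a training pipeline), the sketch below fits a quadratic model to sampled optimal values of an inner LP and minimizes that model; all dimensions and data are invented.

```python
# Toy illustration of a learned surrogate, NOT the paper's quality-aware method:
# the "expensive" objective is the optimal value of an inner LP that depends on a
# 2-dimensional design vector y; a cheap quadratic model fitted to sampled values
# is optimized in its place. All dimensions and data are invented.
import numpy as np
from scipy.optimize import linprog, minimize
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
n_vars = 40
A_ub = rng.uniform(0.0, 1.0, size=(10, n_vars))
b_ub = np.full(10, 5.0)
c0, C = rng.uniform(1.0, 2.0, size=n_vars), rng.normal(size=(2, n_vars))

def expensive_value(y):
    # Expensive evaluation: solve the inner LP whose costs depend on the design y.
    res = linprog(c0 + y @ C, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * n_vars, method="highs")
    return res.fun

# Sample the low-dimensional design space and fit a small quadratic surrogate.
Y = rng.uniform(-1.0, 1.0, size=(80, 2))
values = np.array([expensive_value(y) for y in Y])
surrogate = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3)).fit(Y, values)

# Optimize the cheap surrogate instead of the expensive objective.
result = minimize(lambda y: float(surrogate.predict(y.reshape(1, -1))[0]),
                  x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("surrogate minimizer:", np.round(result.x, 3),
      "| true value there:", round(expensive_value(result.x), 3))
```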
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Bio-inspired Optimization: metaheuristic algorithms for optimization [0.0]
Solving complex real-world problems has become a fundamentally vital and critical task.
Traditional optimization methods are effective for small-scale problems.
For real-world large-scale problems, they either do not scale, fail to obtain optimal solutions, or only produce solutions after a long running time.
arXiv Detail & Related papers (2020-02-24T13:26:34Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)