Machine Learning for Combinatorial Optimisation of Partially-Specified
Problems: Regret Minimisation as a Unifying Lens
- URL: http://arxiv.org/abs/2205.10157v1
- Date: Fri, 20 May 2022 13:06:29 GMT
- Title: Machine Learning for Combinatorial Optimisation of Partially-Specified
Problems: Regret Minimisation as a Unifying Lens
- Authors: Stefano Teso, Laurens Bliek, Andrea Borghesi, Michele Lombardi, Neil
Yorke-Smith, Tias Guns, Andrea Passerini
- Abstract summary: It is increasingly common to solve optimisation problems that are partially-specified.
The challenge is to learn them from available data, while taking into account a set of hard constraints.
This paper overviews four seemingly unrelated approaches, that can each be viewed as learning the objective function of a hard optimisation problem.
- Score: 34.87439325210671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is increasingly common to solve combinatorial optimisation problems that
are partially-specified. We survey the case where the objective function or the
relations between variables are not known or are only partially specified. The
challenge is to learn them from available data, while taking into account a set
of hard constraints that a solution must satisfy, and that solving the
optimisation problem (esp. during learning) is computationally very demanding.
This paper overviews four seemingly unrelated approaches, that can each be
viewed as learning the objective function of a hard combinatorial optimisation
problem: 1) surrogate-based optimisation, 2) empirical model learning, 3)
decision-focused learning (`predict + optimise'), and 4) structured-output
prediction. We formalise each learning paradigm, at first in the ways commonly
found in the literature, and then bring the formalisations together in a
compatible way using regret. We discuss the differences and interactions
between these frameworks, highlight the opportunities for cross-fertilization
and survey open directions.
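To make the unifying quantity concrete: in the decision-focused (`predict + optimise') setting, regret measures how much objective value is lost by optimising over a learned objective instead of the true one. The sketch below is a minimal toy illustration of that idea, not the paper's formalisation; the top-k selection problem, the `solve` oracle and all names are assumptions made for the example.

```python
# Minimal toy sketch of regret in a predict-and-optimise setting.
# NOT the paper's formalisation: the top-k selection problem, the
# `solve` oracle and all names below are assumptions for illustration.
import numpy as np

def solve(values, k):
    """Oracle for a toy combinatorial problem: pick the k items with
    the largest (true or predicted) values, returned as a 0/1 vector."""
    x = np.zeros_like(values)
    x[np.argsort(values)[-k:]] = 1.0
    return x

def regret(pred_values, true_values, k):
    """Objective gap between optimising with the true values and with
    the predicted ones, both evaluated under the true values."""
    x_pred = solve(pred_values, k)
    x_true = solve(true_values, k)
    return float(true_values @ x_true - true_values @ x_pred)

true_values = np.array([5.0, 1.0, 3.0, 4.0])
pred_values = np.array([2.0, 6.0, 3.0, 1.0])   # a poor prediction
print(regret(pred_values, true_values, k=2))    # 9.0 - 4.0 = 5.0
```

Decision-focused learning trains the predictive model to drive this quantity (or a differentiable surrogate of it) towards zero, rather than minimising a purely predictive loss such as mean squared error.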
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Can Learned Optimization Make Reinforcement Learning Less Difficult? [70.5036361852812]
We consider whether learned optimization can help overcome reinforcement learning difficulties.
Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties.
arXiv Detail & Related papers (2024-07-09T17:55:23Z)
- Graph Reinforcement Learning for Combinatorial Optimization: A Survey and Unifying Perspective [6.199818486385127]
We use the trial-and-error paradigm of Reinforcement Learning for discovering better decision-making strategies.
This work focuses on non-canonical graph problems for which performant algorithms are typically not known.
arXiv Detail & Related papers (2024-04-09T17:45:25Z)
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness requirements.
Our results show that rich parametrizations effectively mitigate the challenges of non-convex, infinite-dimensional constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Generalization In Multi-Objective Machine Learning [27.806085423595334]
Multi-objective learning offers a natural framework for handling problems with multiple objectives without having to commit to early trade-offs.
Statistical learning theory, however, so far offers almost no insight into the generalization properties of multi-objective learning.
arXiv Detail & Related papers (2022-08-29T11:06:39Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization Problems [9.015720257837575]
We frame the task as regression between input and solution spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
As a theoretical foundation, we present a learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z)
- Learning Hard Optimization Problems: A Data Generation Perspective [44.4747903763245]
This paper demonstrates how sensitive the ability to approximate an optimization problem is to the way its training data is generated.
It proposes a method for producing (exact or approximate) solutions to optimization problems that are amenable to supervised learning tasks, as sketched below.
arXiv Detail & Related papers (2021-06-04T17:03:44Z)
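To illustrate the data-generation viewpoint of the last entry, the sketch below builds a supervised dataset by solving small optimisation instances exactly and recording (instance, solution) pairs. It is a toy, assumption-laden example, not the cited paper's method: the knapsack instance sizes, the brute-force solver and the feature encoding are all made up for illustration.

```python
# Toy illustration (not the cited paper's method): generating
# supervised training pairs by solving small instances exactly.
import itertools
import numpy as np

def solve_knapsack(values, weights, capacity):
    """Brute-force exact solver for a tiny 0/1 knapsack instance."""
    n = len(values)
    best_x, best_val = np.zeros(n), -np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        if weights @ x <= capacity and values @ x > best_val:
            best_x, best_val = x, values @ x
    return best_x

rng = np.random.default_rng(0)
dataset = []
for _ in range(100):                      # each instance doubles as the input features
    values = rng.uniform(0, 10, size=8)
    weights = rng.uniform(1, 5, size=8)
    capacity = 0.5 * weights.sum()
    x_star = solve_knapsack(values, weights, capacity)
    dataset.append((np.concatenate([values, weights, [capacity]]), x_star))
# `dataset` can now be fed to a structured-output or regression model.
```

A supervised model (e.g. a structured-output predictor) can then be trained on `dataset`; the cited paper studies how the way such data is generated affects what can be learned.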