Learning for Spatial Branching: An Algorithm Selection Approach
- URL: http://arxiv.org/abs/2204.10834v1
- Date: Fri, 22 Apr 2022 17:23:43 GMT
- Title: Learning for Spatial Branching: An Algorithm Selection Approach
- Authors: Bissan Ghaddar, Ignacio Gómez-Casares, Julio González-Díaz, Brais González-Rodríguez, Beatriz Pateiro-López, Sofía Rodríguez-Ballesteros
- Abstract summary: We develop a learning framework for branching and show its efficacy in the context of non-linear optimization problems.
The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances.
Experiments on different benchmark instances from the literature show that the learning-based branching rule significantly outperforms the standard rules.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of machine learning techniques to improve the performance of
branch-and-bound optimization algorithms is a very active area in the context
of mixed integer linear problems, but little has been done for non-linear
optimization. To bridge this gap, we develop a learning framework for spatial
branching and show its efficacy in the context of the
Reformulation-Linearization Technique for polynomial optimization problems. The
proposed learning is performed offline, based on instance-specific features and
with no computational overhead when solving new instances. Novel graph-based
features are introduced, which turn out to play an important role for the
learning. Experiments on different benchmark instances from the literature show
that the learning-based branching rule significantly outperforms the standard
rules.
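The abstract's core idea, spatial branching, splits a continuous variable's domain to tighten the relaxation of non-linear terms. As a minimal, hypothetical sketch (the paper's rule is learned offline from instance features; all names here are illustrative), a classical baseline scores each variable by the relaxation violation of the bilinear terms it appears in and branches on the highest-scoring one:

```python
def select_branching_variable(sol, terms):
    """Pick the variable whose bilinear terms carry the largest total
    relaxation violation (a standard 'maximum violation' spatial rule).

    sol   : relaxation solution, mapping variable/auxiliary names to values
    terms : list of (xi, xj, w) where w is the linearized product xi*xj
    """
    scores = {}
    for xi, xj, w in terms:
        # Violation of the relaxed product w ~= xi * xj at this solution.
        violation = abs(sol[w] - sol[xi] * sol[xj])
        scores[xi] = scores.get(xi, 0.0) + violation
        scores[xj] = scores.get(xj, 0.0) + violation
    return max(scores, key=scores.get)

# Toy relaxation solution where the auxiliary w_xy badly under-estimates x*y:
sol = {"x": 2.0, "y": 3.0, "z": 1.0, "w_xy": 1.0, "w_xz": 0.0}
terms = [("x", "y", "w_xy"), ("x", "z", "w_xz")]
print(select_branching_variable(sol, terms))  # → x
```

The paper replaces this kind of hand-crafted score with one learned offline, so the branching decision costs no extra time when solving new instances.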
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
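The claim that the backward pass of unrolled optimization amounts to solving a linear system by an iterative method can be seen on a toy problem. The sketch below is illustrative only (it is not the paper's Folded Optimization system): it unrolls gradient descent on f(x) = 0.5*(x - theta)^2 and propagates dx/dtheta through each step; the resulting recurrence is a fixed-point iteration whose limit is the implicit derivative dx*/dtheta = 1.

```python
def unrolled_solve(theta, steps=60, lr=0.5):
    """Unroll gradient descent on f(x) = 0.5*(x - theta)**2 and
    accumulate dx/dtheta through the iterations."""
    x = 0.0
    dx_dtheta = 0.0
    for _ in range(steps):
        # Forward update: x <- x - lr * grad, with grad = (x - theta).
        x = x - lr * (x - theta)
        # Backward recurrence: one step has Jacobian (1 - lr) w.r.t. x
        # and lr w.r.t. theta. Iterating this is a fixed-point solve of
        # the linear system g = (1 - lr) * g + lr, whose solution is g = 1.
        dx_dtheta = (1.0 - lr) * dx_dtheta + lr
    return x, dx_dtheta

x, g = unrolled_solve(3.0)
print(round(x, 6), round(g, 6))  # → 3.0 1.0
```

At convergence the accumulated derivative matches the implicit-function answer, which is the equivalence the paper analyzes in general form.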
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
arXiv Detail & Related papers (2023-09-16T07:30:12Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming [10.06803520598035]
We develop a new approach to the problem, called maximum optimality margin, which designs the machine learning loss function from the optimality condition of the downstream optimization.
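A hedged sketch of the margin idea, assuming a toy LP described by an explicit vertex list (the function name and signature are hypothetical, not the paper's formulation): the loss is zero when the predicted cost vector makes the observed solution optimal by at least the margin, and is a hinge penalty otherwise.

```python
def optimality_margin_loss(c_hat, x_star, vertices, margin=1.0):
    """Hinge loss on the LP optimality condition (minimization):
    x_star should beat every other vertex under cost c_hat by `margin`."""
    def cost(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))
    base = cost(c_hat, x_star)
    # x_star is optimal iff base <= cost(c_hat, v) for every vertex v;
    # the hinge penalizes violations of this condition by the margin.
    return max(
        max(0.0, base - cost(c_hat, v) + margin)
        for v in vertices if v != x_star
    )

vertices = [(1.0, 0.0), (0.0, 1.0)]
c_hat = (1.0, 2.0)
print(optimality_margin_loss(c_hat, (1.0, 0.0), vertices))  # → 0.0 (optimal by margin)
print(optimality_margin_loss(c_hat, (0.0, 1.0), vertices))  # → 2.0 (suboptimal)
```

Training the cost predictor against such a loss ties the learning objective directly to downstream optimality rather than to parameter-prediction accuracy.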
arXiv Detail & Related papers (2023-01-26T17:53:38Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- OASIS: An Active Framework for Set Inversion [4.014524824655106]
We introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.
We focus on active learning, a family of new and powerful techniques which can achieve the same level of accuracy with fewer data points compared to traditional learning methods.
arXiv Detail & Related papers (2021-05-31T15:04:43Z)
- Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes [0.755972004983746]
In black-box optimization, features have to be derived from a set of $(x,f(x))$ samples.
We show that a linear dimensionality reduction via matrix factorization significantly contributes towards a better detection of correlation between different problem instances.
arXiv Detail & Related papers (2020-09-30T08:46:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.