Local Optima Correlation Assisted Adaptive Operator Selection
- URL: http://arxiv.org/abs/2305.02805v1
- Date: Wed, 3 May 2023 13:25:41 GMT
- Title: Local Optima Correlation Assisted Adaptive Operator Selection
- Authors: Jiyuan Pei, Hao Tong, Jialin Liu, Yi Mei, Xin Yao
- Abstract summary: We propose to empirically analyse the relationship between operators in terms of the correlation between their local optima.
Based on this newly proposed local optima correlation metric, we propose a novel approach for adaptively selecting among the operators during the search process.
- Score: 4.983846689106013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For solving combinatorial optimisation problems with metaheuristics,
different search operators are applied for sampling new solutions in the
neighbourhood of a given solution. It is important to understand the
relationship between operators for various purposes, e.g., adaptively deciding
when to use which operator to find optimal solutions efficiently. However, it
is difficult to theoretically analyse this relationship, especially in the
complex solution space of combinatorial optimisation problems. In this paper,
we propose to empirically analyse the relationship between operators in terms
of the correlation between their local optima and develop a measure for
quantifying their relationship. Comprehensive analyses on a wide range of
capacitated vehicle routing problem benchmark instances show a consistent
pattern in the correlation between commonly used operators. Based on
this newly proposed local optima correlation metric, we propose a novel
approach for adaptively selecting among the operators during the search
process. The core intention is to improve search efficiency by not wasting
computational resources on exploring neighbourhoods whose local optima have
already been reached. Experiments on randomly generated instances
and commonly used benchmark datasets are conducted. Results show that the
proposed approach outperforms commonly used adaptive operator selection
methods.
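As a rough illustration of the idea above, here is a minimal Python sketch, not the paper's actual implementation: the names `climb`, `local_optima_correlation`, and `pick_next_operator`, and the use of Pearson correlation over local-optima fitness values, are all assumptions. The correlation between two operators is estimated by hill-climbing from a shared set of starting solutions and correlating the fitness of the local optima each operator reaches; selection then prefers operators weakly correlated with the one whose neighbourhood is exhausted.

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

def climb(operator, start, evaluate, samples_per_step=30):
    """Hill-climb with a single operator: repeatedly sample neighbours and
    move to the best one, stopping when no sampled neighbour improves.
    Returns the fitness of the local optimum reached (minimisation)."""
    best, best_fit = start, evaluate(start)
    while True:
        candidate = min((operator(best) for _ in range(samples_per_step)),
                        key=evaluate)
        candidate_fit = evaluate(candidate)
        if candidate_fit >= best_fit:  # no sampled neighbour improves
            return best_fit
        best, best_fit = candidate, candidate_fit

def local_optima_correlation(op_a, op_b, starts, evaluate):
    """Correlate the local-optima fitness values two operators reach from
    the same starting solutions (a stand-in for the paper's metric)."""
    fit_a = [climb(op_a, s, evaluate) for s in starts]
    fit_b = [climb(op_b, s, evaluate) for s in starts]
    return correlation(fit_a, fit_b)

def pick_next_operator(current_op, operators, corr):
    """Adaptive selection heuristic: once current_op's neighbourhood is
    exhausted, switch to the operator least correlated with it, since a
    highly correlated operator likely leads to a similar local optimum."""
    others = [op for op in operators if op is not current_op]
    return min(others, key=lambda op: corr[current_op, op])
```

Under this reading, a high correlation between two operators suggests they tend to reach local optima of similar quality from the same starts, so alternating between them is unlikely to escape a stall; the paper's precise metric and selection rule may of course differ.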
Related papers
- Feature-Based Interpretable Surrogates for Optimization [0.8437187555622164]
In this work, we investigate how we can use more general optimization rules to increase interpretability.
The proposed rules do not map to a concrete solution but to a set of solutions characterized by common features.
In particular, we demonstrate the improvement in solution quality that our approach offers compared to existing interpretable surrogates for optimization.
arXiv Detail & Related papers (2024-09-03T13:12:49Z)
- Modeling Local Search Metaheuristics Using Markov Decision Processes [0.0]
We introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics.
This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff.
arXiv Detail & Related papers (2024-07-29T11:28:30Z)
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [61.580419063416734]
A recent stream of structured learning approaches has improved the practical state of the art for a range of optimization problems.
The key idea is to exploit the statistical distribution over instances instead of dealing with instances separately.
In this article, we investigate methods that smooth the risk by perturbing the policy, which eases optimization and improves the generalization error.
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- Distributed Fractional Bayesian Learning for Adaptive Optimization [7.16937736207894]
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named optimizer amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization Problems [9.015720257837575]
We consider the regression between combinatorial spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
For learning foundations, we present learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Multi-layer local optima networks for the analysis of advanced local search-based algorithms [0.6299766708197881]
Local Optima Network (LON) is a graph model that compresses the fitness landscape of a particular optimization problem based on a specific neighborhood operator and a local search algorithm.
This paper proposes the concept of multi-layer LONs as well as a methodology to explore these models, aiming at extracting metrics for fitness landscape analysis (a toy LON-construction sketch follows this list).
arXiv Detail & Related papers (2020-04-29T03:20:01Z)
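To make the LON notion above a little more concrete, here is a minimal, hypothetical Python sketch, not the cited paper's code: `local_search` and `perturb` are assumed problem-specific callables, and solutions are assumed hashable. Nodes are local optima, and weighted directed edges count observed transitions between optima after a perturbation followed by local search.

```python
from collections import defaultdict

def build_lon(starts, local_search, perturb, hops=50):
    """Build a toy Local Optima Network by sampling: map each start to a
    local optimum, then repeatedly perturb and re-optimise, recording each
    transition between local optima as a weighted directed edge."""
    nodes = set()
    edges = defaultdict(int)  # (from_optimum, to_optimum) -> transition count
    for start in starts:
        current = local_search(start)  # map the start to its local optimum
        nodes.add(current)
        for _ in range(hops):
            nxt = local_search(perturb(current))
            nodes.add(nxt)
            edges[current, nxt] += 1
            current = nxt
    return nodes, dict(edges)
```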
This list is automatically generated from the titles and abstracts of the papers in this site.