Multi-layer local optima networks for the analysis of advanced local
search-based algorithms
- URL: http://arxiv.org/abs/2004.13936v1
- Date: Wed, 29 Apr 2020 03:20:01 GMT
- Title: Multi-layer local optima networks for the analysis of advanced local
search-based algorithms
- Authors: Marcella Scoczynski Ribeiro Martins, Mohamed El Yafrani, Myriam R. B.
S. Delgado, and Ricardo Luders
- Abstract summary: Local Optima Network (LON) is a graph model that compresses the fitness landscape of a particular optimization problem based on a specific neighborhood operator and a local search algorithm.
This paper proposes the concept of multi-layer LONs as well as a methodology to explore these models aiming at extracting metrics for fitness landscape analysis.
- Score: 0.6299766708197881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Local Optima Network (LON) is a graph model that compresses the fitness
landscape of a particular combinatorial optimization problem based on a
specific neighborhood operator and a local search algorithm. Determining which
and how landscape features affect the effectiveness of search algorithms is
relevant for both predicting their performance and improving the design
process. This paper proposes the concept of multi-layer LONs as well as a
methodology to explore these models aiming at extracting metrics for fitness
landscape analysis. Constructing such models and extracting and analyzing their
metrics are preliminary steps toward extending the study of
single-neighborhood-operator heuristics to more sophisticated ones that use
multiple operators. Therefore, in the present paper we investigate a two-layer
LON obtained from instances of a combinatorial problem using bit-flip and swap
operators. First, we enumerate instances of NK-landscape model and use the hill
climbing heuristic to build the corresponding LONs. Then, using LON metrics, we
analyze how efficient the search might be when combining both strategies. The
experiments show promising results and demonstrate the ability of multi-layer
LONs to provide useful information that could be used in metaheuristics
based on multiple operators such as Variable Neighborhood Search.
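The construction the abstract describes — enumerate an NK-landscape instance, run hill climbing from every solution, and connect local optima that are reachable from one another — can be sketched as follows. All specifics below (the tiny instance sizes, the best-improvement climber, the edge rule of "one perturbing move, then re-climb") are illustrative assumptions, not the paper's exact protocol:

```python
import itertools
import random
from collections import defaultdict

random.seed(0)
N, K = 6, 1  # tiny illustrative NK instance, not a size from the paper

# Random contribution tables: component i depends on bit i and its K
# circular right-neighbors, as in the standard NK-landscape model.
tables = [
    {bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
    for _ in range(N)
]

def fitness(s):
    return sum(
        tables[i][tuple(s[(i + j) % N] for j in range(K + 1))] for i in range(N)
    ) / N

def bitflip_neighbors(s):
    for i in range(N):
        t = list(s)
        t[i] ^= 1
        yield tuple(t)

def swap_neighbors(s):
    for i in range(N):
        for j in range(i + 1, N):
            if s[i] != s[j]:
                t = list(s)
                t[i], t[j] = t[j], t[i]
                yield tuple(t)

def hill_climb(s, neighbors):
    # Best-improvement hill climbing: move while a strictly better neighbor exists.
    while True:
        best = max(neighbors(s), key=fitness, default=s)
        if fitness(best) <= fitness(s):
            return s
        s = best

def lon_layer(neighbors):
    # One LON layer: nodes are the local optima of this operator; an edge
    # (a, b) means one perturbing move applied to a, followed by re-climbing,
    # lands in the basin of b.
    optima = {hill_climb(s, neighbors) for s in itertools.product((0, 1), repeat=N)}
    edges = defaultdict(int)
    for a in optima:
        for p in neighbors(a):
            b = hill_climb(p, neighbors)
            if b != a:
                edges[(a, b)] += 1
    return optima, edges

flip_optima, flip_edges = lon_layer(bitflip_neighbors)
swap_optima, swap_edges = lon_layer(swap_neighbors)
print(len(flip_optima), "bit-flip optima;", len(swap_optima), "swap optima")
```

A two-layer LON would then keep both edge sets over the union of the two optima sets, with inter-layer links wherever switching operators moves the search between basins.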
Related papers
- Modeling Local Search Metaheuristics Using Markov Decision Processes [0.0]
We introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics.
This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff.
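As a toy illustration of that framing (the state, action, and reward choices here are my own minimal sketch, not necessarily the paper's formalization): local search can be written as a deterministic MDP whose states are solutions, whose actions are neighborhood moves, and whose reward is the fitness change; the policy that is greedy in immediate reward is then exactly best-improvement hill climbing:

```python
import random

random.seed(2)

# States: bit strings of length N encoded as integers; actions: flip bit a.
N = 4
fit = {s: random.random() for s in range(2 ** N)}  # state -> fitness

def step(state, action):
    # Deterministic MDP transition; the reward is the fitness improvement.
    nxt = state ^ (1 << action)
    return nxt, fit[nxt] - fit[state]

def greedy_episode(state):
    # A policy greedy in one-step reward == best-improvement hill climbing.
    trajectory = [state]
    while True:
        action = max(range(N), key=lambda a: step(state, a)[1])
        nxt, reward = step(state, action)
        if reward <= 0:  # no improving move: a local optimum, episode ends
            return trajectory
        state = nxt
        trajectory.append(state)

endpoints = {greedy_episode(s)[-1] for s in range(2 ** N)}
print("local optima reached:", sorted(endpoints))
```

Convergence statements about the metaheuristic then become statements about this MDP, e.g. every greedy episode terminates because fitness strictly increases along the trajectory.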
arXiv Detail & Related papers (2024-07-29T11:28:30Z)
- Local Optima Correlation Assisted Adaptive Operator Selection [4.983846689106013]
We propose to empirically analyse the relationship between operators in terms of the correlation between their local optima.
Based on this newly proposed local optima correlation metric, we propose a novel approach for adaptively selecting among the operators during the search process.
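One hypothetical way to realize such a metric (my own minimal sketch under assumed definitions, not the authors' exact procedure): run hill climbing with each operator from the same random starts and correlate the fitnesses of the local optima reached.

```python
import random

random.seed(1)
N = 10

# Rugged toy fitness: a random contribution per adjacent bit pair (NK-style, K=1).
table = {
    (i, a, b): random.random() for i in range(N) for a in (0, 1) for b in (0, 1)
}

def fitness(s):
    return sum(table[(i, s[i], s[(i + 1) % N])] for i in range(N))

def bitflip_neighbors(s):
    for i in range(N):
        t = list(s)
        t[i] ^= 1
        yield tuple(t)

def swap_neighbors(s):
    for i in range(N):
        for j in range(i + 1, N):
            if s[i] != s[j]:
                t = list(s)
                t[i], t[j] = t[j], t[i]
                yield tuple(t)

def hill_climb(s, neighbors):
    # Best-improvement hill climbing under the given neighborhood operator.
    while True:
        best = max(neighbors(s), key=fitness, default=s)
        if fitness(best) <= fitness(s):
            return s
        s = best

def pearson(x, y):
    # Plain Pearson r, with a guard for constant series.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

starts = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(50)]
flip_fit = [fitness(hill_climb(s, bitflip_neighbors)) for s in starts]
swap_fit = [fitness(hill_climb(s, swap_neighbors)) for s in starts]
print("operator correlation:", round(pearson(flip_fit, swap_fit), 3))
```

A high correlation would suggest the two operators succeed or fail on the same regions of the landscape; a low one hints they are complementary, which is the signal an adaptive operator selector could exploit.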
arXiv Detail & Related papers (2023-05-03T13:25:41Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters [0.23301643766310373]
We propose an algorithmic framework to automatically generate efficient deep neural networks.
The framework is based on evolving directed acyclic graphs (DAGs)
It allows mixtures of different classical operations: convolutions, recurrences and dense layers, but also more newfangled operations such as self-attention.
arXiv Detail & Related papers (2023-02-27T08:00:33Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model; and the models, relying on diverse metrics which estimate the quality of the different parts composing it.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- A Forward Backward Greedy approach for Sparse Multiscale Learning [0.0]
We propose a feature driven Reproducing Kernel Hilbert space (RKHS) for which the associated kernel has a weighted multiscale structure.
For generating approximations in this space, we provide a practical forward-backward algorithm that is shown to greedily construct a set of basis functions having a multiscale structure.
We analyze the performance of the approach on a variety of simulation and real data sets.
arXiv Detail & Related papers (2021-02-14T04:22:52Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
For self-supervised learning of the search, we evaluate the proposed algorithm on multi-homography estimation and demonstrate an accuracy that is superior to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.