Fitness-based Linkage Learning and Maximum-Clique Conditional Linkage
Modelling for Gray-box Optimization with RV-GOMEA
- URL: http://arxiv.org/abs/2402.10757v1
- Date: Fri, 16 Feb 2024 15:28:27 GMT
- Authors: Georgios Andreadis, Tanja Alderliesten, Peter A.N. Bosman
- Abstract summary: In this work, we combine fitness-based linkage learning and conditional linkage modelling in RV-GOMEA.
We find that the new RV-GOMEA not only performs best on most problems, but also that the overhead of learning the conditional linkage models during optimization is often negligible.
- Score: 0.552480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For many real-world optimization problems it is possible to perform partial
evaluations, meaning that the impact of changing a few variables on a
solution's fitness can be computed very efficiently. It has been shown that
such partial evaluations can be excellently leveraged by the Real-Valued GOMEA
(RV-GOMEA) that uses a linkage model to capture dependencies between problem
variables. Recently, conditional linkage models were introduced for RV-GOMEA,
expanding its state-of-the-art performance even to problems with overlapping
dependencies. However, that work assumed that the dependency structure is known
a priori. Fitness-based linkage learning techniques have previously been used
to detect dependencies during optimization, but only for non-conditional
linkage models. In this work, we combine fitness-based linkage learning and
conditional linkage modelling in RV-GOMEA. In addition, we propose a new way to
model overlapping dependencies in conditional linkage models to maximize the
joint sampling of fully interdependent groups of variables. We compare the
resulting novel variant of RV-GOMEA to other variants of RV-GOMEA and VkD-CMA
on 12 problems with varying degrees of overlapping dependencies. We find that
the new RV-GOMEA not only performs best on most problems, but also that the
overhead of learning the conditional linkage models during optimization is
often negligible.
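The partial evaluations described in the abstract can be sketched for an additively decomposable fitness function, where each subfunction depends on only a few variables and changing a variable requires recomputing only the subfunctions containing it. This is a minimal illustrative example; the class and method names are hypothetical and do not reflect RV-GOMEA's actual implementation:

```python
# Sketch of gray-box partial evaluation, assuming the fitness decomposes as
# f(x) = sum_k f_k(x restricted to subset S_k). All names are illustrative.

class GrayBoxFunction:
    def __init__(self, subsets, subfunctions):
        self.subsets = subsets        # S_k: tuple of variable indices per subfunction
        self.subfuns = subfunctions   # f_k: one callable per subset
        # Index: which subfunctions depend on each variable.
        self.var_to_subs = {}
        for k, S in enumerate(subsets):
            for i in S:
                self.var_to_subs.setdefault(i, []).append(k)

    def full_eval(self, x):
        """Evaluate the whole sum (cost proportional to sum of |S_k|)."""
        return sum(f(tuple(x[i] for i in S))
                   for f, S in zip(self.subfuns, self.subsets))

    def partial_eval(self, x, fitness, changes):
        """Apply {index: new_value} changes, recomputing only affected f_k."""
        affected = {k for i in changes for k in self.var_to_subs[i]}
        for k in affected:  # subtract old contributions
            fitness -= self.subfuns[k](tuple(x[i] for i in self.subsets[k]))
        y = list(x)
        for i, v in changes.items():
            y[i] = v
        for k in affected:  # add new contributions
            fitness += self.subfuns[k](tuple(y[i] for i in self.subsets[k]))
        return y, fitness

# Overlapping chain of dependencies: f(x) = sum_i (x_i - x_{i+1})^2.
n = 5
chain = GrayBoxFunction(
    subsets=[(i, i + 1) for i in range(n - 1)],
    subfunctions=[lambda v: (v[0] - v[1]) ** 2] * (n - 1),
)
x = [0.0, 1.0, 3.0, 2.0, 5.0]
f = chain.full_eval(x)
# Changing x[2] touches only the two subfunctions that contain index 2.
y, f_new = chain.partial_eval(x, f, {2: 1.5})
assert abs(f_new - chain.full_eval(y)) < 1e-12
```

Note that the cost of a partial evaluation scales with the number of subfunctions touched by the changed variables, not with the total problem size, which is what makes leveraging such evaluations attractive for overlapping dependency structures.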
Related papers
- Modularity based linkage model for neuroevolution [4.9444321684311925]
Crossover between neural networks is considered disruptive due to the strong functional dependency between connection weights.
We propose a modularity-based linkage model at the weight level to preserve functionally dependent communities.
Our algorithm finds better, more functionally dependent linkage which leads to more successful crossover and better performance.
arXiv Detail & Related papers (2023-06-02T01:32:49Z)
- Joint Graph Learning and Model Fitting in Laplacian Regularized Stratified Models [5.933030735757292]
Laplacian regularized stratified models (LRSM) are models that utilize the explicit or implicit network structure of the sub-problems.
This paper shows the importance and sensitivity of graph weights in LRSM, and provably shows that the sensitivity can be arbitrarily large.
We propose a generic approach to jointly learn the graph while fitting the model parameters by solving a single optimization problem.
arXiv Detail & Related papers (2023-05-04T06:06:29Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Multi-Target XGBoostLSS Regression [91.3755431537592]
We present an extension of XGBoostLSS that models multiple targets and their dependencies in a probabilistic regression setting.
Our approach outperforms existing GBMs with respect to runtime and compares well in terms of accuracy.
arXiv Detail & Related papers (2022-10-13T08:26:14Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Solving Multi-Structured Problems by Introducing Linkage Kernels into GOMEA [0.0]
We introduce linkage kernels, whereby a linkage structure is learned for each solution over its local neighborhood.
We also introduce a novel benchmark function called Best-of-Traps (BoT) that has an adjustable degree of different linkage structures.
On both BoT and a worst-case scenario-based variant of the well-known MaxCut problem, we experimentally find a vast performance improvement of linkage-kernel GOMEA over GOMEA.
arXiv Detail & Related papers (2022-03-11T14:48:40Z)
- Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms [0.0]
We present the latest version of, and propose substantial enhancements to, the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA).
We show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems.
arXiv Detail & Related papers (2021-09-11T11:35:14Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariable log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Variational Model-based Policy Optimization [34.80171122943031]
Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL.
We propose an objective function as a variational lower-bound of the log-likelihood to jointly learn and improve model and policy.
Our experiments on a number of continuous control tasks show that, despite being more complex, our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient.
arXiv Detail & Related papers (2020-06-09T18:30:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.