Solving Multi-Structured Problems by Introducing Linkage Kernels into
GOMEA
- URL: http://arxiv.org/abs/2203.05970v1
- Date: Fri, 11 Mar 2022 14:48:40 GMT
- Authors: Arthur Guijt, Dirk Thierens, Tanja Alderliesten, Peter A.N. Bosman
- Abstract summary: We introduce linkage kernels, whereby a linkage structure is learned for each solution over its local neighborhood.
We also introduce a novel benchmark function called Best-of-Traps (BoT) that has an adjustable degree of different linkage structures.
On both BoT and a worst-case scenario-based variant of the well-known MaxCut problem, we experimentally find a vast performance improvement of linkage-kernel GOMEA over GOMEA.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Model-Based Evolutionary Algorithms (MBEAs) can be highly scalable by virtue
of linkage (or variable interaction) learning. This requires, however, that the
linkage model can capture the exploitable structure of a problem. Usually,
models such as a linkage tree attempt to capture a single type of linkage
structure. In practice, however, problems may exhibit multiple linkage
structures. This is for instance the case in multi-objective optimization when
the objectives have different linkage structures. Linkage models that aim at
capturing a single type of linkage structure cannot model this sufficiently
well, deteriorating the advantages brought by MBEAs. Therefore,
here, we introduce linkage kernels, whereby a linkage structure is learned for
each solution over its local neighborhood. We implement linkage kernels into
the MBEA known as GOMEA that was previously found to be highly scalable when
solving various problems. We further introduce a novel benchmark function
called Best-of-Traps (BoT) that has an adjustable degree of different linkage
structures. On both BoT and a worst-case scenario-based variant of the
well-known MaxCut problem, we experimentally find a vast performance
improvement of linkage-kernel GOMEA over GOMEA with a single linkage tree as
well as the MBEA known as DSMGA-II.
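The trap-based benchmark described above can be illustrated with a small sketch. This is not the paper's implementation: the deceptive trap function is standard, but the permutation-plus-target encoding of each instance and the `best_of_traps` helper are assumptions made for illustration, based only on the abstract's description of BoT as taking the best over several trap instances with different linkage structures.

```python
# Hedged sketch of a deceptive trap subfunction and a Best-of-Traps style
# evaluation. The exact construction in the paper may differ; this only
# illustrates taking the best over several permuted trap instances, each
# of which induces its own linkage structure.
import random

def trap(u, k):
    """Standard deceptive trap of size k on the number of 'correct' bits u."""
    return k if u == k else k - 1 - u

def concatenated_trap(bits, perm, target, k):
    """Sum of traps over consecutive size-k blocks of the permuted solution;
    a bit counts as 'correct' if it matches the instance's target string."""
    score = 0
    for block in range(0, len(bits), k):
        u = sum(bits[perm[i]] == target[perm[i]] for i in range(block, block + k))
        score += trap(u, k)
    return score

def best_of_traps(bits, instances, k):
    """Best-of-Traps: the best score over several trap instances,
    each with its own permutation and target string (assumed encoding)."""
    return max(concatenated_trap(bits, perm, target, k)
               for perm, target in instances)

# Usage example with two randomly generated instances.
random.seed(0)
n, k = 8, 4
instances = [(random.sample(range(n), n),
              [random.randint(0, 1) for _ in range(n)])
             for _ in range(2)]
value = best_of_traps([1] * n, instances, k)
```

Because each instance has its own permutation and target, a linkage model fit to one instance's block structure is misleading for the others, which is the multi-structured difficulty the linkage kernels are meant to address.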
Related papers
- GOAL: A Generalist Combinatorial Optimization Agent Learning [0.05461938536945722]
GOAL is a model capable of efficiently solving multiple hard combinatorial optimization problems (COPs).
GOAL consists of a single backbone plus light-weight problem-specific adapters for input and output processing.
We show that GOAL is only slightly inferior to the specialized baselines while being the first multi-task model that solves a wide range of COPs.
arXiv Detail & Related papers (2024-06-21T11:55:20Z) - Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond [58.39457881271146]
We introduce a novel framework of combinatorial multi-armed bandits with multivariant and probabilistically triggering arms (CMAB-MT).
Compared with existing CMAB works, CMAB-MT not only enhances the modeling power but also allows improved results by leveraging distinct statistical properties for multivariant random variables.
Our framework can include many important problems as applications, such as episodic reinforcement learning (RL) and probabilistic maximum coverage for goods distribution.
arXiv Detail & Related papers (2024-06-03T14:48:53Z) - Real-Time Image Segmentation via Hybrid Convolutional-Transformer Architecture Search [49.81353382211113]
We address the challenge of integrating multi-head self-attention into high resolution representation CNNs efficiently.
We develop a multi-target multi-branch supernet method, which fully utilizes the advantages of high-resolution features.
We present a series of models via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method that searches for the best hybrid combination of light-weight convolution layers and memory-efficient self-attention layers.
arXiv Detail & Related papers (2024-03-15T15:47:54Z) - Fitness-based Linkage Learning and Maximum-Clique Conditional Linkage
Modelling for Gray-box Optimization with RV-GOMEA [0.552480439325792]
In this work, we combine fitness-based linkage learning and conditional linkage modelling in RV-GOMEA.
We find that the new RV-GOMEA not only performs best on most problems, but also that the overhead of learning the conditional linkage models during optimization is often negligible.
arXiv Detail & Related papers (2024-02-16T15:28:27Z) - GNBG: A Generalized and Configurable Benchmark Generator for Continuous
Numerical Optimization [5.635586285644365]
It is crucial to use a benchmark test suite that encompasses a diverse range of problem instances with various characteristics.
Traditional benchmark suites often consist of numerous fixed test functions, making it challenging to align these with specific research objectives.
This paper introduces the Generalized Numerical Benchmark Generator (GNBG) for single-objective, box-constrained, continuous numerical optimization.
arXiv Detail & Related papers (2023-12-12T09:04:34Z) - Modularity based linkage model for neuroevolution [4.9444321684311925]
Crossover between neural networks is considered disruptive due to the strong functional dependency between connection weights.
We propose a modularity-based linkage model at the weight level to preserve functionally dependent communities.
Our algorithm finds better, more functionally dependent linkage which leads to more successful crossover and better performance.
arXiv Detail & Related papers (2023-06-02T01:32:49Z) - DepGraph: Towards Any Structural Pruning [68.40343338847664]
We study general structural pruning of arbitrary architecture like CNNs, RNNs, GNNs and Transformers.
We propose a general and fully automatic method, Dependency Graph (DepGraph), to explicitly model the dependency between layers and comprehensively group parameters for pruning.
In this work, we extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision transformer for images, GAT for graph, DGCNN for 3D point cloud, alongside LSTM for language, and demonstrate that, even with a
arXiv Detail & Related papers (2023-01-30T14:02:33Z) - SWARM Parallelism: Training Large Models Can Be Surprisingly
Communication-Efficient [69.61083127540776]
Deep learning applications benefit from using large models with billions of parameters.
Training these models is notoriously expensive due to the need for specialized HPC clusters.
We consider alternative setups for training large models: using cheap "preemptible" instances or pooling existing resources from multiple regions.
arXiv Detail & Related papers (2023-01-27T18:55:19Z) - The Schrödinger Bridge between Gaussian Measures has a Closed Form [101.79851806388699]
We focus on the dynamic formulation of OT, also known as the Schrödinger bridge (SB) problem.
In this paper, we provide closed-form expressions for SBs between Gaussian measures.
arXiv Detail & Related papers (2022-02-11T15:59:01Z) - Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z) - Consistency of Spectral Clustering on Hierarchical Stochastic Block
Models [5.983753938303726]
We study the hierarchy of communities in real-world networks under a generic block model.
We prove the strong consistency of this method under a wide range of model parameters.
Unlike most existing work, our theory covers multiscale networks where the connection probabilities may differ by orders of magnitude.
arXiv Detail & Related papers (2020-04-30T01:08:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.