An island-parallel ensemble metaheuristic algorithm for large graph coloring problems
- URL: http://arxiv.org/abs/2504.15082v1
- Date: Mon, 21 Apr 2025 13:15:23 GMT
- Title: An island-parallel ensemble metaheuristic algorithm for large graph coloring problems
- Authors: Tansel Dokeroglu, Tayfun Kucukyilmaz, Ahmet Cosar
- Abstract summary: We propose a new island-parallel ensemble metaheuristic algorithm (PEM-Color) to solve large GCP instances. To the best of our knowledge, this is the first study that combines metaheuristics in an ensemble approach and applies it to the GCP.
- Score: 0.4915744683251149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Graph Coloring Problem (GCP) is an NP-hard vertex labeling problem in which no two adjacent vertices may receive the same color. Large instances of GCP cannot be solved in reasonable execution times by exact algorithms, so soft computing approaches such as metaheuristics have proven very efficient for solving them. In this study, we propose a new island-parallel ensemble metaheuristic algorithm (PEM-Color) to solve large GCP instances. Ensemble learning is a machine learning approach based on combining the outputs of multiple models instead of relying on a single one. We use Message Passing Interface (MPI) parallel computation libraries to combine recent state-of-the-art metaheuristics, Harris Hawks Optimization (HHO), Artificial Bee Colony (ABC), and Teaching-Learning-Based Optimization (TLBO), to further improve the quality of their solutions. To the best of our knowledge, this is the first study that combines metaheuristics in an ensemble approach and applies it to the GCP. We conducted experiments on large graph instances from the well-known DIMACS benchmark using 64 processors and achieved significant improvements in execution times. The experiments also indicate an almost linear speed-up with strong scalability potential. The solution quality is promising as well: our algorithm outperforms 13 state-of-the-art algorithms.
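To make the island-model idea concrete, here is a minimal sketch of an MPI island scheme for graph coloring written with mpi4py. It is an illustrative assumption rather than the paper's PEM-Color implementation: the HHO, ABC, and TLBO metaheuristics are replaced by a trivial placeholder local search, and the function names (`island_coloring`, `local_search`), the migration schedule, and the toy graph are hypothetical.

```python
import random

from mpi4py import MPI  # assumes mpi4py is installed; launch with e.g. `mpirun -n 4 python island_coloring.py`


def conflicts(adj, coloring):
    """Number of edges whose two endpoints share a color."""
    return sum(1 for u, nbrs in adj.items() for v in nbrs if u < v and coloring[u] == coloring[v])


def local_search(adj, coloring, k, steps, rng):
    """Placeholder metaheuristic: randomly recolor vertices that are currently in conflict."""
    current = dict(coloring)
    vertices = list(adj)
    for _ in range(steps):
        u = rng.choice(vertices)
        if any(current[u] == current[v] for v in adj[u]):
            current[u] = rng.randrange(k)
    return current


def island_coloring(adj, k, rounds=10, steps=500):
    comm = MPI.COMM_WORLD
    rng = random.Random(comm.Get_rank())  # a different seed per island
    coloring = {v: rng.randrange(k) for v in adj}
    for _ in range(rounds):
        coloring = local_search(adj, coloring, k, steps, rng)
        # Migration phase: every island shares its best coloring and adopts the global best.
        candidates = comm.allgather((conflicts(adj, coloring), comm.Get_rank(), coloring))
        coloring = min(candidates)[2]
    return coloring


if __name__ == "__main__":
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}  # a 5-cycle, 3-colorable
    best = island_coloring(adj, k=3)
    if MPI.COMM_WORLD.Get_rank() == 0:
        print("conflicts:", conflicts(adj, best), "coloring:", best)
```

In PEM-Color itself the islands run the actual metaheuristics (HHO, ABC, TLBO); this sketch only mirrors the parallel island structure, not their search operators.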
Related papers
- A Greedy Strategy for Graph Cut [95.2841574410968]
We propose a greedy strategy, called GGC, to solve the Graph Cut problem. It starts from the state where each data sample is regarded as its own cluster and dynamically merges two clusters at a time (a generic sketch of this agglomerative pattern follows this entry). GGC has a nearly linear computational complexity with respect to the number of samples.
arXiv Detail & Related papers (2024-12-28T05:49:42Z)
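As a rough illustration of the agglomerative pattern described in the GGC entry above, the sketch below starts with singleton clusters and repeatedly merges the most strongly connected pair. This is an assumption for illustration only: the merge criterion, the names (`greedy_merge`, `link`), and the naive quadratic pair scan are not taken from the GGC paper, whose reported complexity is nearly linear in the number of samples.

```python
from itertools import combinations


def greedy_merge(weights, n, target_k):
    """Greedy agglomerative merging; weights maps (i, j) with i < j to a similarity score."""
    clusters = [{i} for i in range(n)]

    def link(a, b):
        # Total similarity between two clusters.
        return sum(weights.get((min(i, j), max(i, j)), 0.0) for i in a for j in b)

    while len(clusters) > target_k:
        ia, ib = max(combinations(range(len(clusters)), 2),
                     key=lambda p: link(clusters[p[0]], clusters[p[1]]))
        clusters[ia] |= clusters[ib]  # merge the best-connected pair
        clusters.pop(ib)              # ib > ia, so index ia is unaffected
    return clusters


# Two obvious groups: {0, 1, 2} and {3, 4}.
w = {(0, 1): 1.0, (0, 2): 0.8, (1, 2): 1.0, (2, 3): 0.1, (3, 4): 1.0}
print(greedy_merge(w, n=5, target_k=2))
```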
- Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better [18.121514220195607]
Cluster deletion is an NP-hard graph clustering objective with applications in computational and social network analysis.
We first provide a tighter analysis of two previous approximation algorithms, improving their approximation guarantees from 4 to 3.
We show that both algorithms can be derandomized in a surprisingly simple way, by greedily taking a maximum-degree node in an auxiliary graph and forming a cluster around it.
arXiv Detail & Related papers (2024-04-24T18:39:18Z)
- CMSA algorithm for solving the prioritized pairwise test data generation problem in software product lines [1.1970409518725493]
In Software Product Lines (SPLs) it may be difficult or even impossible to test all the products of the family because of the large number of valid feature combinations that may exist.
In this work, we propose a new approach based on a hybrid metaheuristic called Construct, Merge, Solve & Adapt (CMSA).
arXiv Detail & Related papers (2024-02-07T05:43:57Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of both worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models [96.1052289276254]
This work introduces a new MAP-solver, based on the popular Dual Block-Coordinate Ascent principle.
Surprisingly, by making a small change to the low-performing solver, we derive the new solver MPLP++ that significantly outperforms all existing solvers by a large margin.
arXiv Detail & Related papers (2020-04-16T16:20:53Z)
- GLSearch: Maximum Common Subgraph Detection via Learning to Search [33.9052190473029]
We propose GLSearch, a Graph Neural Network (GNN) based learning to search model.
Our model is built upon the branch and bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time.
Our GLSearch can be potentially extended to solve many other problems with constraints on graphs.
arXiv Detail & Related papers (2020-02-08T10:03:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.