More Effective Randomized Search Heuristics for Graph Coloring Through
Dynamic Optimization
- URL: http://arxiv.org/abs/2005.13825v1
- Date: Thu, 28 May 2020 07:55:12 GMT
- Title: More Effective Randomized Search Heuristics for Graph Coloring Through
Dynamic Optimization
- Authors: Jakob Bossek, Frank Neumann, Pan Peng, Dirk Sudholt
- Abstract summary: We show that evolutionary algorithms can solve the graph coloring problem for bipartite graphs more efficiently by using dynamic optimization.
An island model using 3 islands succeeds in an optimal time of $\Theta(m)$ on every $m$-edge bipartite graph, outperforming offspring populations.
- Score: 15.45783225341009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic optimization problems have gained significant attention in
evolutionary computation as evolutionary algorithms (EAs) can easily adapt to
changing environments. We show that EAs can solve the graph coloring problem
for bipartite graphs more efficiently by using dynamic optimization. In our
approach the graph instance is given incrementally such that the EA can
reoptimize its coloring when a new edge introduces a conflict. We show that,
when edges are inserted in a way that preserves graph connectivity, Randomized
Local Search (RLS) efficiently finds a proper 2-coloring for all bipartite
graphs. This includes graphs for which RLS and other EAs need exponential
expected time in a static optimization scenario. We investigate different ways
of building up the graph by popular graph traversals such as
breadth-first-search and depth-first-search and analyse the resulting runtime
behavior. We further show that offspring populations (e.g., a (1+$\lambda$)
RLS) lead to an exponential speedup in $\lambda$. Finally, an island model
using 3 islands succeeds in an optimal time of $\Theta(m)$ on every $m$-edge
bipartite graph, outperforming offspring populations. This is the first example
where an island model guarantees a speedup that is not bounded in the number of
islands.
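The incremental reoptimization idea in the abstract can be sketched in a few lines: edges arrive one at a time, and whenever a new edge makes the current 2-coloring improper, Randomized Local Search (flip one random vertex's color, accept if the conflict count does not increase) repairs it. This is a minimal illustrative sketch, not the paper's analyzed algorithm; the function names and the `max_steps` cap are assumptions for the example.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def conflicts(coloring, edges):
    """Count monochromatic (conflicting) edges under the coloring."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def rls_reoptimize(coloring, edges, vertices, max_steps=100_000):
    """Randomized Local Search: flip a uniformly random vertex's color
    and accept the flip iff the number of conflicts does not increase."""
    best = conflicts(coloring, edges)
    steps = 0
    while best > 0 and steps < max_steps:
        v = random.choice(vertices)
        coloring[v] ^= 1                 # flip between the two colors
        new = conflicts(coloring, edges)
        if new <= best:
            best = new                   # accept (improving or neutral move)
        else:
            coloring[v] ^= 1             # reject: revert the flip
        steps += 1
    return coloring

def dynamic_2_coloring(vertices, edge_stream):
    """Dynamic scenario: insert edges one by one and reoptimize only
    when the newly inserted edge introduces a conflict."""
    coloring = {v: 0 for v in vertices}
    edges = []
    for u, v in edge_stream:
        edges.append((u, v))
        if coloring[u] == coloring[v]:   # new edge is monochromatic
            rls_reoptimize(coloring, edges, list(vertices))
    return coloring

# Example: a path graph (bipartite), edges inserted in traversal order.
verts = list(range(6))
stream = [(i, i + 1) for i in range(5)]
col = dynamic_2_coloring(verts, stream)
print(conflicts(col, stream))
```

Inserting edges along a connected traversal (BFS/DFS order, as the paper studies) keeps each repair local, which is the intuition behind the efficiency results quoted above.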
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Hierarchical cycle-tree packing model for $K$-core attack problem [0.0]
A hierarchical cycle-tree packing model is introduced here for this challenging optimization problem.
We analyze this model through the replica-symmetric cavity method of statistical physics.
The associated hierarchical cycle-tree guided attack (hCTGA) is able to construct nearly optimal attack solutions for regular random graphs.
arXiv Detail & Related papers (2023-03-02T06:47:33Z) - Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms to solve non-convex optimization problems.
In this paper, we show two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph
Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - Time Complexity Analysis of Randomized Search Heuristics for the Dynamic
Graph Coloring Problem [15.45783225341009]
We study the dynamic setting where edges are added to the current graph.
We then analyze the expected time for randomized search heuristics to recompute high-quality solutions.
arXiv Detail & Related papers (2021-05-26T13:00:31Z) - Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for
Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z) - Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z) - Learning to Solve Combinatorial Optimization Problems on Real-World
Graphs in Linear Time [12.43303621344215]
We develop a new framework to solve any optimization problem over graphs without expert knowledge.
Our method trains a graph neural network using reinforcement learning on an unlabeled training set of graphs.
We demonstrate our approach on both NP-hard problems with optimality gaps close to 1, and show that our method is able to generalize well.
arXiv Detail & Related papers (2020-06-06T01:35:45Z) - The Quantum Approximate Optimization Algorithm Needs to See the Whole
Graph: Worst Case Examples [6.810856082577402]
The Quantum Approximate Optimization Algorithm can be applied to search problems on graphs with a cost function that is a sum of terms corresponding to the edges.
We show that the QAOA with $(d-1)^{2p} < n^A$ for any $A < 1$ can only achieve an approximation ratio of 1/2 for Max-Cut on bipartite random $d$-regular graphs for $d$ large.
arXiv Detail & Related papers (2020-05-18T14:23:09Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.