Edge-set reduction to efficiently solve the graph partitioning problem
with the genetic algorithm
- URL: http://arxiv.org/abs/2307.10410v1
- Date: Wed, 19 Jul 2023 18:39:15 GMT
- Title: Edge-set reduction to efficiently solve the graph partitioning problem
with the genetic algorithm
- Authors: Ali Chaouche and Menouar Boulif
- Abstract summary: We investigate the impact of modifying the size of the chromosomes on the edge-based genetic algorithm (GA).
We show that for big, dense instances, the encoding representation becomes very large and degrades the GA's efficiency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The graph partitioning problem (GPP) is among the most challenging models in
optimization. Because of its NP-hardness, researchers have directed their
interest towards approximate methods such as genetic algorithms (GAs). The
edge-based GA has shown promising results when solving the GPP. However, for big,
dense instances, the encoding representation becomes very large and degrades
the GA's efficiency. In this paper, we investigate the impact of modifying the
size of the chromosomes on the edge-based GA by reducing the GPP edge set. We
study the GA's performance with different levels of reduction and report the
obtained results.
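To make the encoding concrete, here is a minimal sketch of an edge-based chromosome for the GPP together with a simple edge-set reduction. The binary edge-cut genes, the `keep_ratio` parameter, and the random sampling strategy are illustrative assumptions, not the paper's actual reduction method:

```python
import random

def edge_based_chromosome(edges):
    # One binary gene per edge: 1 = the edge is cut
    # (its endpoints land in different parts).
    return [random.randint(0, 1) for _ in edges]

def reduce_edge_set(edges, keep_ratio):
    # Keep only a fraction of the edges in the encoding; dropped edges
    # are implicitly treated as uncut. The chromosome length shrinks
    # from |E| to roughly keep_ratio * |E|.
    k = max(1, int(len(edges) * keep_ratio))
    return random.sample(edges, k)

def cut_size(chromosome):
    # Fitness ingredient: number of edges marked as cut.
    return sum(chromosome)

# Dense instance: complete graph K6 has 15 edges.
edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
reduced = reduce_edge_set(edges, keep_ratio=0.5)
chrom = edge_based_chromosome(reduced)
print(len(edges), len(reduced), len(chrom))
```

With a 50% reduction, the chromosome over K6 has 7 genes instead of 15, which is the kind of shrinkage the paper studies at different reduction levels.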
Related papers
- Graph Adversarial Diffusion Convolution [49.974206213411904]
This paper introduces a min-max optimization formulation for the Graph Signal Denoising (GSD) problem.
We derive a new variant of the Graph Diffusion Convolution architecture, called Graph Adversarial Diffusion Convolution (GADC).
arXiv Detail & Related papers (2024-06-04T07:43:04Z)
- Rethinking and Accelerating Graph Condensation: A Training-Free Approach with Class Partition [56.26113670151363]
Graph condensation is a data-centric solution to replace the large graph with a small yet informative condensed graph.
Existing GC methods suffer from intricate optimization processes, necessitating excessive computing resources.
We propose a training-free GC framework termed Class-partitioned Graph Condensation (CGC).
CGC achieves state-of-the-art performance with a more efficient condensation process.
arXiv Detail & Related papers (2024-05-22T14:57:09Z)
- GARA: A novel approach to Improve Genetic Algorithms' Accuracy and Efficiency by Utilizing Relationships among Genes [1.7226572355808027]
We propose the Gene Regulatory Genetic Algorithm (GRGA), the first to utilize relationships among genes for improving the GA's accuracy and efficiency.
We use a directed multipartite graph encapsulating the solution space, called RGGR, where each node corresponds to a gene in the solution and the edge represents the relationship between adjacent nodes.
The obtained RGGR is then employed to determine appropriate loci of crossover and mutation operators, thereby directing the evolutionary process toward faster and better convergence.
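One illustrative reading of how such a relationship graph could direct crossover (the scoring and sampling below are hypothetical, not the paper's exact operator) is to cut preferentially where adjacent genes are weakly related:

```python
import random

def pick_crossover_locus(relationship, rng=random):
    # relationship[i] is a hypothetical strength score for the link
    # between gene i and gene i+1. Weak links are good cut points,
    # so loci are sampled with weight inversely related to strength.
    weights = [1.0 / (1.0 + s) for s in relationship]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# Genes 1 and 2 are weakly related (0.1), so locus 1 is favored.
locus = pick_crossover_locus([5.0, 0.1, 4.0])
print(locus)
```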
arXiv Detail & Related papers (2024-04-28T08:33:39Z)
- Improving genetic algorithms performance via deterministic population shrinkage [9.334663477968027]
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run following a predetermined schedule, configured by speed and severity parameters.
Results show several combinations of speed-severity where SVPS-GA preserves the solution quality while improving performances, by reducing the number of evaluations needed for success.
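As a rough illustration of such a schedule (the decay shape and the exact roles of the speed and severity parameters here are assumptions, not the paper's formulation):

```python
def svps_population_size(initial_size, generation, max_generations,
                         speed=2.0, severity=0.5):
    # Deterministic shrinkage: the population decays linearly from
    # initial_size towards (1 - severity) * initial_size; 'speed'
    # controls how quickly the schedule reaches its final size.
    progress = min(1.0, (generation / max_generations) * speed)
    final_size = initial_size * (1.0 - severity)
    size = initial_size - (initial_size - final_size) * progress
    return max(2, int(round(size)))

# Population size sampled every 10 generations over a 50-generation run.
sizes = [svps_population_size(100, g, 50) for g in range(0, 51, 10)]
print(sizes)
```

The point of the study is that such a shrinking population can preserve solution quality while spending fewer fitness evaluations than a fixed-size GA.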
arXiv Detail & Related papers (2024-01-22T17:05:16Z)
- Exploring Sparsity in Graph Transformers [67.48149404841925]
Graph Transformers (GTs) have achieved impressive results on various graph-related tasks.
However, the huge computational cost of GTs hinders their deployment and application, especially in resource-constrained environments.
We propose a comprehensive Graph Transformer SParsification (GTSP) framework that helps to reduce the computational complexity of GTs.
arXiv Detail & Related papers (2023-12-09T06:21:44Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias [15.739122088062793]
Gradient regularization (GR) is a method that penalizes the gradient norm of the training loss during training.
We show that a specific finite-difference computation, composed of both gradient ascent and descent steps, reduces the computational cost for GR.
We demonstrate that finite-difference GR is closely related to some other algorithms based on iterative ascent and descent steps for exploring flat minima.
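A rough sketch of the finite-difference idea on a toy quadratic loss (the loss, `gamma`, and `eps` are illustrative stand-ins, not the paper's setup): the GR objective L(θ) + (γ/2)·||∇L(θ)||² has a Hessian-vector term ∇²L·∇L in its gradient, which can be approximated by re-evaluating the gradient after a small ascent step along ∇L:

```python
import numpy as np

def loss(theta):
    # Toy quadratic loss standing in for a training loss.
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    # Analytic gradient of the toy loss.
    return theta

def gr_gradient_finite_diff(theta, gamma=0.1, eps=1e-3):
    # Gradient of loss(theta) + (gamma/2) * ||grad(theta)||^2.
    # The Hessian-vector product H @ g is approximated by a finite
    # difference: one small gradient-ascent step, then a fresh
    # gradient evaluation (the ascent/descent pairing in the summary).
    g = grad(theta)
    g_ascent = grad(theta + eps * g)
    hvp = (g_ascent - g) / eps  # ~ H @ g
    return g + gamma * hvp

theta = np.array([1.0, -2.0])
print(gr_gradient_finite_diff(theta))
```

For this quadratic, the Hessian is the identity, so the finite-difference result matches the exact GR gradient 1.1·θ, while avoiding an explicit second-derivative computation.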
arXiv Detail & Related papers (2022-10-06T07:12:54Z)
- Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers [17.09807200410981]
We propose SAG, the first scalable adversarial attack method with the Alternating Direction Method of Multipliers (ADMM).
We show that SAG can significantly reduce the computation and memory overhead compared with the state-of-the-art approach.
arXiv Detail & Related papers (2020-09-22T00:33:36Z)
- Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.