Improving Critical Node Detection Using Neural Network-based
Initialization in a Genetic Algorithm
- URL: http://arxiv.org/abs/2402.00404v1
- Date: Thu, 1 Feb 2024 07:53:25 GMT
- Title: Improving Critical Node Detection Using Neural Network-based
Initialization in a Genetic Algorithm
- Authors: Chanjuan Liu, Shike Ge, Zhihan Chen, Wenbin Pei, Enqiang Zhu, Yi Mei,
Hisao Ishibuchi
- Abstract summary: The primary objective of CNP-1a is to minimize the pair-wise connectivity in the remaining network after deleting a limited number of nodes from a network.
To improve the efficiency of solving CNP-1a, a knowledge-guided genetic algorithm named K2GA has been proposed.
The effectiveness of the proposed knowledge-guided genetic algorithm is verified by experiments on 26 real-world instances of complex networks.
- Score: 3.5877750256097465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Critical Node Problem (CNP) is concerned with identifying the critical
nodes in a complex network. These nodes play a significant role in maintaining
the connectivity of the network, and removing them can negatively impact
network performance. CNP has been studied extensively due to its numerous
real-world applications. Among the different versions of CNP, CNP-1a has gained
the most popularity. The primary objective of CNP-1a is to minimize the
pair-wise connectivity in the remaining network after deleting a limited number
of nodes from a network. Due to the NP-hard nature of CNP-1a, many
heuristic/metaheuristic algorithms have been proposed to solve this problem.
However, most existing algorithms start with a random initialization, leading
to a high cost of obtaining an optimal solution. To improve the efficiency of
solving CNP-1a, a knowledge-guided genetic algorithm named K2GA has been
proposed. Unlike the standard genetic algorithm framework, K2GA has two main
components: a pretrained neural network to obtain prior knowledge on possible
critical nodes, and a hybrid genetic algorithm with local search for finding an
optimal set of critical nodes based on the knowledge given by the trained
neural network. The local search process utilizes a cut node-based greedy
strategy. The effectiveness of the proposed knowledge-guided genetic algorithm
is verified by experiments on 26 real-world instances of complex networks.
Experimental results show that K2GA outperforms the state-of-the-art algorithms
regarding the best, median, and average objective values, and improves the best
upper bounds on the best objective values for eight real-world instances.
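For illustration, the CNP-1a objective described above (the pairwise connectivity of the residual network after deleting a set of nodes) can be sketched as follows. This is a minimal stdlib-only sketch, not the authors' implementation; the function name and the toy graph are hypothetical:

```python
from collections import deque

def pairwise_connectivity(adjacency, removed):
    """CNP-1a objective: sum over connected components C of the
    residual graph of |C| * (|C| - 1) / 2, i.e. the number of node
    pairs that remain connected after deleting `removed`."""
    removed = set(removed)
    seen = set(removed)
    total = 0
    for start in adjacency:
        if start in seen:
            continue
        # BFS to collect one connected component of the residual graph
        size = 0
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adjacency[node]:
                if nb not in seen and nb not in removed:
                    seen.add(nb)
                    queue.append(nb)
        total += size * (size - 1) // 2
    return total

# Toy path graph 0-1-2-3-4: deleting node 2 splits it into {0,1} and {3,4}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(pairwise_connectivity(graph, []))   # 5*4/2 = 10
print(pairwise_connectivity(graph, [2]))  # 1 + 1 = 2
```

A genetic algorithm for CNP-1a would use this quantity as the fitness to minimize over candidate node sets of a fixed budget; in K2GA the initial population is biased toward nodes the pretrained network flags as likely critical, rather than sampled uniformly at random.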
Related papers
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
arXiv Detail & Related papers (2023-05-18T13:59:02Z) - A Comprehensively Improved Hybrid Algorithm for Learning Bayesian
Networks: Multiple Compound Memory Erasing [0.0]
This paper presents a new hybrid algorithm, MCME (Multiple Compound Memory Erasing).
MCME retains the advantages of the first two methods, addresses the shortcomings of the above CI tests, and innovates on the scoring function in the direction-discrimination stage.
Extensive experiments show that MCME performs better than or comparably to existing algorithms.
arXiv Detail & Related papers (2022-12-05T12:52:07Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Identifying critical nodes in complex networks by graph representation
learning [2.304938062591095]
Influence maximization is one of the main problems in critical node mining.
IMGNN is a graph learning framework that takes the centralities of nodes in a network as input and outputs the probability that each node belongs to the optimal set of initial spreaders.
IMGNN is more efficient than human-based algorithms in minimizing the size of the initial spreader set under a fixed infection scale.
arXiv Detail & Related papers (2022-01-20T03:41:22Z) - Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes from a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation.
This representation is combined with a dueling double deep Q-network to create the first end-to-end algorithm for solving CNP.
arXiv Detail & Related papers (2021-12-03T14:23:05Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - Neural Bipartite Matching [19.600193617583955]
This report describes how neural execution is applied to a complex algorithm.
It is achieved via neural execution based only on features generated from a single GNN.
The evaluation shows strongly generalising results with the network achieving optimal matching almost 100% of the time.
arXiv Detail & Related papers (2020-05-22T17:50:38Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are initialized with random weights.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.