Predicting Hidden Links and Missing Nodes in Scale-Free Networks with Artificial Neural Networks
- URL: http://arxiv.org/abs/2109.12331v1
- Date: Sat, 25 Sep 2021 10:23:28 GMT
- Title: Predicting Hidden Links and Missing Nodes in Scale-Free Networks with Artificial Neural Networks
- Authors: Rakib Hassan Pran, Ljupco Todorovski
- Abstract summary: We propose a methodology, in the form of an algorithm, to predict hidden links and missing nodes in scale-free networks.
We use Béla Bollobás's directed scale-free random graph generation algorithm as a generator of random networks to produce a large set of scale-free network data.
- Score: 1.0152838128195467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world networks exist in the form of scale-free networks,
such as the World Wide Web, protein-protein interaction networks, semantic
networks, airline networks, and interbank payment networks. To analyze these
networks, it is necessary to understand the properties of scale-free networks;
using those properties, we can identify anomalies in such networks. In this
research, we propose a methodology, in the form of an algorithm, to predict
hidden links and missing nodes in scale-free networks. It combines a generator
of random networks as a source of training data with artificial neural
networks for supervised classification: we train the neural networks to
discriminate between different subtypes of scale-free networks and to predict
the missing nodes and hidden links among (present and missing) nodes in a
given scale-free network. We chose Béla Bollobás's directed scale-free random
graph generation algorithm as the generator of random networks, using it to
produce a large set of scale-free network data.
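The paper's code is not reproduced on this page, but the setup the abstract describes can be sketched. NetworkX's scale_free_graph implements the Bollobas-Borgs-Chayes-Riordan directed scale-free model, and a supervised link-prediction training set can be derived from it by hiding a fraction of edges and labeling node pairs; the degree-based features below are our assumption, not the paper's.

```python
# Sketch only: generate Bollobas-style directed scale-free graphs with
# NetworkX and turn them into labeled node pairs for a supervised
# classifier. The degree-based features are an illustrative assumption.
import random
import networkx as nx

def make_link_dataset(n_nodes=200, hide_frac=0.1, seed=0):
    rng = random.Random(seed)
    g = nx.DiGraph(nx.scale_free_graph(n_nodes, seed=seed))  # Bollobas et al. model
    edges = list(g.edges())
    hidden = rng.sample(edges, int(hide_frac * len(edges)))  # the "hidden links"
    g.remove_edges_from(hidden)

    def features(u, v):
        return [g.out_degree(u), g.in_degree(u), g.out_degree(v), g.in_degree(v)]

    positives = [(features(u, v), 1) for u, v in hidden]
    negatives = []
    nodes = list(g.nodes())
    while len(negatives) < len(positives):       # sample observed non-edges
        u, v = rng.sample(nodes, 2)
        if not g.has_edge(u, v):
            negatives.append((features(u, v), 0))
    return positives + negatives

dataset = make_link_dataset()
print(len(dataset), "labeled node pairs")
```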
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
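As a minimal sketch of the premise (not the authors' construction), an MLP's parameters can be encoded as a directed graph whose nodes are neurons and whose edges carry the weights:

```python
# Minimal sketch: encode an MLP's parameters as a graph whose nodes are
# neurons and whose edges carry weights, so a graph network could process
# it. Not the authors' exact construction.
import numpy as np
import networkx as nx

def mlp_to_graph(weight_matrices):
    g = nx.DiGraph()
    for layer, w in enumerate(weight_matrices):        # w: (fan_in, fan_out)
        fan_in, fan_out = w.shape
        for i in range(fan_in):
            for j in range(fan_out):
                g.add_edge(("neuron", layer, i), ("neuron", layer + 1, j),
                           weight=float(w[i, j]))
    return g

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
g = mlp_to_graph(weights)
print(g.number_of_nodes(), g.number_of_edges())        # 14 nodes, 48 edges
```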
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
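As a rough illustration of the named ingredient, a histogram-intersection similarity between the local feature distributions of two nodes; the binning and normalization choices here are our assumptions:

```python
# Illustrative sketch of histogram intersection between local node
# feature distributions, the core ingredient named in GNN-LoFI.
import numpy as np
import networkx as nx

def local_histogram(g, node, feats, bins=8, value_range=(0.0, 1.0)):
    neigh = list(g.neighbors(node)) + [node]
    hist, _ = np.histogram(feats[neigh], bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)              # normalized local histogram

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())        # 1.0 = identical histograms

g = nx.karate_club_graph()
feats = np.random.default_rng(0).random(g.number_of_nodes())
print(round(histogram_intersection(local_histogram(g, 0, feats),
                                   local_histogram(g, 33, feats)), 3))
```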
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
There may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z)
- Understanding the network formation pattern for better link prediction [4.8334761517444855]
We propose a novel method named Link prediction using Multiple Order Local Information (MOLI).
MOLI exploits local information from neighbors at different distances, with parameters that can be prior-driven based on prior knowledge.
We show that MOLI outperforms the other 11 widely used link prediction algorithms on 11 different types of simulated and real-world networks.
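MOLI's exact scoring rule is not reproduced in this summary; as a loose sketch of the multi-order idea, one can weight walk counts at increasing distances, with the weights standing in for MOLI's prior-driven parameters:

```python
# Loose sketch of the multi-order idea behind MOLI: score a candidate
# link by a weighted sum of walk counts at increasing distances. The
# paper's actual scoring rule differs.
import numpy as np
import networkx as nx

def multi_order_score(g, u, v, weights=(1.0, 0.5, 0.25)):
    a = nx.to_numpy_array(g)
    walks = np.eye(len(a))
    score = 0.0
    for w in weights:              # k-th power of A counts length-k walks
        walks = walks @ a
        score += w * walks[u, v]
    return score

g = nx.karate_club_graph()
print(multi_order_score(g, 0, 33))  # higher score = more plausible link
```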
arXiv Detail & Related papers (2021-10-17T15:30:04Z)
- A Probabilistic Approach to Neural Network Pruning [20.001091112545065]
We theoretically study the performance of two pruning techniques (random and magnitude-based) on FCNs and CNNs.
The results establish that there exist pruned networks with expressive power within any specified bound from the target network.
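As a minimal sketch of one of the two techniques studied, magnitude-based pruning (random pruning would instead sample the mask uniformly):

```python
# Minimal sketch of magnitude-based pruning: zero out the fraction of
# weights with the smallest absolute values.
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > threshold, w, 0.0)   # ties prune a bit extra

w = np.random.default_rng(0).normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"{(pruned == 0).mean():.1%} of weights pruned")
```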
arXiv Detail & Related papers (2021-05-20T23:19:43Z)
- Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
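A loose sketch of the premise, assuming a Halton / van der Corput sequence as the low-discrepancy source; the paper's random-walk construction on a dense network graph is more involved than this mask:

```python
# Loose sketch: place a sparse layer's connections using a low-discrepancy
# (van der Corput / Halton) sequence instead of a pseudo-random one.
import numpy as np

def van_der_corput(n, base):
    seq = []
    for i in range(1, n + 1):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, rem = divmod(i, base)
            x += rem / denom
        seq.append(x)
    return np.array(seq)

def sparse_mask(fan_in, fan_out, n_weights):
    rows = (van_der_corput(n_weights, 2) * fan_in).astype(int)   # dimension 1
    cols = (van_der_corput(n_weights, 3) * fan_out).astype(int)  # dimension 2
    mask = np.zeros((fan_in, fan_out), dtype=bool)
    mask[rows, cols] = True                      # collisions simply overlap
    return mask

mask = sparse_mask(128, 128, 1024)
print(int(mask.sum()), "of", mask.size, "connections kept")
```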
arXiv Detail & Related papers (2021-03-05T08:45:43Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
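A sketch of the contrastive instance-pair sampling under our own assumptions; the GNN-based discriminator that scores agreement and yields the anomaly score is omitted:

```python
# Sketch of contrastive instance-pair sampling: pair a target node with
# its own local subgraph (positive) and with another node's subgraph
# (negative).
import random
import networkx as nx

def sample_instance_pairs(g, node, radius=1, seed=0):
    rng = random.Random(seed)
    own = nx.ego_graph(g, node, radius=radius)          # positive context
    other = rng.choice([n for n in g.nodes() if n != node])
    foreign = nx.ego_graph(g, other, radius=radius)     # negative context
    return (node, own, 1), (node, foreign, 0)

g = nx.karate_club_graph()
pos, neg = sample_instance_pairs(g, 0)
print(pos[1].number_of_nodes(), neg[1].number_of_nodes())
```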
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
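A very rough sketch of the latent-motif idea, assuming random-walk patch sampling and scikit-learn's plain NMF in place of the paper's network dictionary learning:

```python
# Very rough sketch: sample small k-node patches via random walks,
# flatten their adjacency matrices, and factorize the patch matrix with
# NMF to obtain "latent motifs".
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

def sample_patches(g, k=6, n_patches=200, seed=0):
    rng = np.random.default_rng(seed)
    a = nx.to_numpy_array(g)
    patches = []
    while len(patches) < n_patches:
        visited, cur, steps = [], rng.integers(len(a)), 0
        while len(visited) < k and steps < 100:  # walk to k distinct nodes
            if cur not in visited:
                visited.append(cur)
            neighbors = np.flatnonzero(a[cur])
            cur = rng.choice(neighbors)
            steps += 1
        if len(visited) == k:
            idx = np.array(visited)
            patches.append(a[np.ix_(idx, idx)].ravel())
    return np.array(patches)

g = nx.karate_club_graph()
x = sample_patches(g)                            # shape (200, 36)
motifs = NMF(n_components=4, max_iter=500).fit(x).components_
print(motifs.shape)                              # 4 latent motifs, 6x6 each
```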
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- Modeling Dynamic Heterogeneous Network for Link Prediction using Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
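A sketch of the temporal half of the idea, with random tensors standing in for the hierarchical-attention embeddings and a plain GRU final state in place of temporal attention:

```python
# Sketch: run an RNN over each node's per-snapshot embeddings so the
# final state reflects the network's evolution. Random tensors stand in
# for the hierarchical-attention encoder's output.
import torch
import torch.nn as nn

n_nodes, n_snapshots, dim = 50, 5, 16
snapshot_emb = torch.randn(n_nodes, n_snapshots, dim)   # stand-in embeddings

gru = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
_, h_last = gru(snapshot_emb)                  # h_last: (1, n_nodes, dim)
node_repr = h_last.squeeze(0)                  # temporal node representations

u, v = 0, 1                                    # score a candidate link (u, v)
print(float(node_repr[u] @ node_repr[v]))
```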
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
- ResiliNet: Failure-Resilient Inference in Distributed Neural Networks [56.255913459850674]
We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures.
Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks.
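A sketch of the Failout idea under our own assumptions about shapes and the failure rate:

```python
# Sketch of Failout: during training, zero out the entire output of one
# network partition with some probability, simulating failure of the
# physical node that hosts it (dropout at machine granularity).
import torch

def failout(partition_output, failure_rate=0.1, training=True):
    if not training:
        return partition_output
    survived = (torch.rand(()) >= failure_rate).float()
    return partition_output * survived         # all-or-nothing per partition

x = torch.randn(32, 64)                        # one partition's activations
print(failout(x, failure_rate=0.5).abs().sum().item())
```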
arXiv Detail & Related papers (2020-02-18T05:58:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.