On the Power of Gradual Network Alignment Using Dual-Perception
Similarities
- URL: http://arxiv.org/abs/2201.10945v3
- Date: Thu, 17 Aug 2023 06:03:10 GMT
- Title: On the Power of Gradual Network Alignment Using Dual-Perception
Similarities
- Authors: Jin-Duk Park, Cong Tran, Won-Yong Shin, Xin Cao
- Abstract summary: Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that, since most existing NA methods have attempted to discover all node pairs at once, they do not harness information enriched through interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
- Score: 14.779474659172923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network alignment (NA) is the task of finding the correspondence of nodes
between two networks based on the network structure and node attributes. Our
study is motivated by the fact that, since most existing NA methods have
attempted to discover all node pairs at once, they do not harness information
enriched through interim discovery of node correspondences to more accurately
find the next correspondences during the node matching. To tackle this
challenge, we propose Grad-Align, a new NA method that gradually discovers node
pairs by making full use of node pairs exhibiting strong consistency, which are
easy to discover in the early stage of gradual matching. Specifically,
Grad-Align first generates node embeddings of the two networks based on graph
neural networks along with our layer-wise reconstruction loss, a loss built
upon capturing the first-order and higher-order neighborhood structures. Then,
nodes are gradually aligned by computing dual-perception similarity measures
including the multi-layer embedding similarity as well as the Tversky
similarity, an asymmetric set similarity using the Tversky index applicable to
networks with different scales. Additionally, we incorporate an edge
augmentation module into Grad-Align to reinforce the structural consistency.
Through comprehensive experiments using real-world and synthetic datasets, we
empirically demonstrate that Grad-Align consistently outperforms
state-of-the-art NA methods.
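The Tversky similarity in the dual-perception measure is the standard Tversky index computed over sets; a minimal sketch over neighbor sets of a candidate node pair (the α/β weights and the example sets are illustrative, not the authors' settings):

```python
def tversky_index(x, y, alpha=0.5, beta=0.5):
    """Asymmetric set similarity |X∩Y| / (|X∩Y| + α|X−Y| + β|Y−X|).

    With alpha != beta the measure is asymmetric, which is what makes the
    Tversky index applicable to networks of different scales; alpha ==
    beta == 1 recovers the symmetric Jaccard index.
    """
    x, y = set(x), set(y)
    common = len(x & y)
    denom = common + alpha * len(x - y) + beta * len(y - x)
    return common / denom if denom else 0.0

# Neighbor sets of a candidate node pair from two networks of unequal size.
print(tversky_index({1, 2, 3}, {2, 3, 4, 5, 6}, alpha=0.8, beta=0.2))  # ≈ 0.588
print(tversky_index({1, 2, 3}, {2, 3, 4, 5, 6}, alpha=1.0, beta=1.0))  # Jaccard ≈ 0.333
```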
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
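The per-group weight idea can be sketched as a single message-passing layer in which each node's aggregated features pass through the linear map of its degree bucket (the bucket boundaries, mean aggregation, and ReLU below are illustrative choices, not the paper's exact architecture):

```python
import numpy as np

def stratified_gnn_layer(A, X, weights, boundaries):
    """One GNN layer with a separate weight matrix per degree group.

    weights[g] transforms the nodes whose degree falls in bucket g, as
    assigned by np.digitize over `boundaries` (a sketch of stratification).
    """
    deg = A.sum(axis=1)
    agg = A @ X / np.maximum(deg, 1.0)[:, None]  # mean over neighbors
    groups = np.digitize(deg, boundaries)        # degree bucket per node
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    for g, W in enumerate(weights):
        out[groups == g] = agg[groups == g] @ W  # group-specific transform
    return np.maximum(out, 0.0)                  # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
X = rng.standard_normal((3, 4))
H = stratified_gnn_layer(A, X, [rng.standard_normal((4, 2)) for _ in range(2)],
                         boundaries=[2])         # low- vs high-degree nodes
```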
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks--CONN, a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z)
- NODDLE: Node2vec based deep learning model for link prediction [0.0]
We propose NODDLE (integration of NOde2vec anD Deep Learning mEthod), a deep learning model which incorporates the features extracted by node2vec and feeds them into a hidden neural network.
Experimental results show that this method yields better results than the traditional methods on various social network datasets.
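NODDLE's pipeline (node2vec features fed into a hidden network) can be sketched as below, assuming the node2vec embeddings are precomputed; the random matrix stands in for them, and the tiny MLP uses untrained placeholder weights:

```python
import numpy as np

rng = np.random.default_rng(42)

def edge_features(emb, u, v):
    """Hadamard product of the two node embeddings -- a standard way to
    turn a node pair into an edge feature vector for link prediction."""
    return emb[u] * emb[v]

def mlp_link_score(x, W1, b1, W2, b2):
    """Feedforward scorer: one ReLU hidden layer, sigmoid output."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

emb = rng.standard_normal((10, 16))              # stand-in for node2vec output
W1, b1 = rng.standard_normal((16, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal(8) * 0.1, 0.0
p = mlp_link_score(edge_features(emb, 0, 3), W1, b1, W2, b2)  # in (0, 1)
```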
arXiv Detail & Related papers (2023-05-25T18:43:52Z)
- Node Feature Augmentation Vitaminizes Network Alignment [13.52901288497192]
Network alignment (NA) is the task of discovering node correspondences across multiple networks.
We propose Grad-Align+, a novel NA method built upon a recent state-of-the-art NA method, the so-called Grad-Align.
Grad-Align+ consists of three key components: 1) centrality-based node feature augmentation (CNFA), 2) graph neural network (GNN)-aided embedding similarity calculation alongside the augmented node features, and 3) gradual NA with similarity calculation using aligned cross-network neighbor-pairs (ACNs).
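The gradual-NA idea shared by Grad-Align and Grad-Align+ can be reduced to a greedy loop over a similarity matrix; this sketch deliberately omits the similarity re-computation from ACNs between rounds, which is what makes the real method "gradual":

```python
import numpy as np

def greedy_gradual_align(S):
    """Commit the most confident node pair first, then repeat.

    In Grad-Align the similarities would be re-computed after each round
    using the pairs matched so far (ACNs); here S stays fixed, so this is
    only the outer greedy loop of the method.
    """
    S = S.astype(float).copy()
    matched = []
    for _ in range(min(S.shape)):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        matched.append((int(i), int(j)))
        S[i, :] = -np.inf  # each node is aligned at most once
        S[:, j] = -np.inf
    return matched

S = np.array([[0.9, 0.1], [0.2, 0.8]])
print(greedy_gradual_align(S))  # [(0, 0), (1, 1)]
```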
arXiv Detail & Related papers (2023-04-25T11:59:19Z)
- Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth 2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, and adversarial labels.
Our results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the NTK regime.
arXiv Detail & Related papers (2022-12-05T14:47:52Z)
- Grad-Align+: Empowering Gradual Network Alignment Using Attribute Augmentation [4.536868213405015]
Network alignment (NA) is the task of discovering node correspondences across different networks.
We propose Grad-Align+, a novel NA method using node attribute augmentation.
We show that Grad-Align+ exhibits (a) superiority over benchmark NA methods, (b) empirical validation of our theoretical findings, and (c) the effectiveness of our attribute augmentation module.
arXiv Detail & Related papers (2022-08-23T15:12:12Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Learning Asymmetric Embedding for Attributed Networks via Convolutional Neural Network [19.611523749659355]
We propose a novel deep asymmetric attributed network embedding model based on convolutional graph neural network, called AAGCN.
The main idea is to maximally preserve the asymmetric proximity and asymmetric similarity of directed attributed networks.
We test the performance of AAGCN on three real-world networks for network reconstruction, link prediction, node classification and visualization tasks.
arXiv Detail & Related papers (2022-02-13T13:35:15Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- DINE: A Framework for Deep Incomplete Network Embedding [33.97952453310253]
We propose a Deep Incomplete Network Embedding method, namely DINE.
We first complete the missing part including both nodes and edges in a partially observable network by using the expectation-maximization framework.
We evaluate DINE over three networks on multi-label classification and link prediction tasks.
arXiv Detail & Related papers (2020-08-09T04:59:35Z)
- Revealing the Structure of Deep Neural Networks via Convex Duality [70.15611146583068]
We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of hidden layers.
We show that a set of optimal hidden layer weights for a norm regularized training problem can be explicitly found as the extreme points of a convex set.
We apply the same characterization to deep ReLU networks with whitened data and prove the same weight alignment holds.
arXiv Detail & Related papers (2020-02-22T21:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.