Node Feature Augmentation Vitaminizes Network Alignment
- URL: http://arxiv.org/abs/2304.12751v4
- Date: Fri, 17 May 2024 12:15:08 GMT
- Title: Node Feature Augmentation Vitaminizes Network Alignment
- Authors: Jin-Duk Park, Cong Tran, Won-Yong Shin, Xin Cao
- Abstract summary: Network alignment (NA) is the task of discovering node correspondences across multiple networks.
We propose Grad-Align+, a novel NA method built upon a recent state-of-the-art NA method, the so-called Grad-Align.
Grad-Align+ consists of three key components: 1) centrality-based node feature augmentation (CNFA), 2) graph neural network (GNN)-aided embedding similarity calculation alongside the augmented node features, and 3) gradual NA with similarity calculation using aligned cross-network neighbor-pairs (ACNs).
- Score: 13.52901288497192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network alignment (NA) is the task of discovering node correspondences across multiple networks. Although NA methods have achieved remarkable success in a myriad of scenarios, their effectiveness hinges on additional information such as prior anchor links and/or node features, which may not always be available due to privacy concerns or access restrictions. To tackle this challenge, we propose Grad-Align+, a novel NA method built upon Grad-Align, a recent state-of-the-art NA method that gradually discovers node pairs until all pairs are found. In designing Grad-Align+, we address how to augment node features so that they benefit the NA task and how to design our NA method to maximally exploit the augmented node features. To this end, Grad-Align+ consists of three key components: 1) centrality-based node feature augmentation (CNFA), 2) graph neural network (GNN)-aided embedding similarity calculation alongside the augmented node features, and 3) gradual NA with similarity calculation using aligned cross-network neighbor-pairs (ACNs). Through comprehensive experiments, we demonstrate (a) the superiority of Grad-Align+ over benchmark NA methods, (b) empirical validation of our theoretical findings on the effectiveness of CNFA, (c) the contribution of each component, (d) robustness to network noise, and (e) computational efficiency.
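The first component lends itself to a short illustration. Below is a minimal sketch of centrality-based feature augmentation: compute centrality scores per node and append them to the (possibly placeholder) node feature matrix before the GNN-aided similarity step. The choice of degree and PageRank, the min-max normalization, and the function names are assumptions for illustration; the paper's actual CNFA design may differ.

```python
# Minimal, illustrative sketch of centrality-based node feature augmentation
# (CNFA-style). Degree and PageRank are assumed centrality measures; the exact
# measures and encoding used by Grad-Align+ may differ.
import networkx as nx
import numpy as np

def augment_with_centrality(G: nx.Graph, X: np.ndarray) -> np.ndarray:
    """Append min-max-normalized centrality columns to the node feature matrix X
    (rows follow the order of G.nodes())."""
    nodes = list(G.nodes())
    deg = np.array([G.degree(v) for v in nodes], dtype=float)
    pr_dict = nx.pagerank(G)
    pr = np.array([pr_dict[v] for v in nodes], dtype=float)

    def norm(c):
        rng = c.max() - c.min()
        return (c - c.min()) / rng if rng > 0 else np.zeros_like(c)

    return np.hstack([X, norm(deg)[:, None], norm(pr)[:, None]])

# Two toy networks; the augmented features would then feed a GNN for the
# embedding-similarity step.
Gs, Gt = nx.karate_club_graph(), nx.davis_southern_women_graph()
Xs = np.ones((Gs.number_of_nodes(), 1))  # placeholder features when none are given
Xt = np.ones((Gt.number_of_nodes(), 1))
Xs_aug = augment_with_centrality(Gs, Xs)
Xt_aug = augment_with_centrality(Gt, Xt)
```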
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
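A minimal sketch of the idea, assuming a mean-aggregation layer with nodes bucketed by degree; the bin boundaries, aggregation scheme, and class name below are illustrative and not taken from the paper.

```python
# One weight matrix per degree group inside a simple mean-aggregation GNN layer.
import torch
import torch.nn as nn

class DegreeStratifiedLayer(nn.Module):
    def __init__(self, in_dim, out_dim, degree_bins=(1.0, 5.0, 20.0)):
        super().__init__()
        # Boundaries that split nodes into len(degree_bins) + 1 degree groups.
        self.register_buffer("bins", torch.tensor(degree_bins, dtype=torch.float))
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(len(degree_bins) + 1)]
        )

    def forward(self, x, adj):
        # adj: dense (N, N) adjacency; mean-aggregate neighbor features.
        deg = adj.sum(dim=1)
        h = adj @ x / deg.clamp(min=1).unsqueeze(1)
        group = torch.bucketize(deg, self.bins)          # degree group per node
        out = torch.zeros(x.size(0), self.weights[0].out_features)
        for g, lin in enumerate(self.weights):
            mask = group == g
            if mask.any():
                out[mask] = lin(h[mask])                 # group-specific weights
        return out
```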
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - A Topological Perspective on Demystifying GNN-Based Link Prediction Performance [72.06314265776683]
Topological Concentration (TC) is based on the intersection of the local subgraph of each node with those of its neighbors.
We show that TC has a higher correlation with LP performance than other node-level topological metrics like degree and subgraph density.
We propose Approximated Topological Concentration (ATC) and theoretically/empirically justify its efficacy in approximating TC and reducing the complexity.
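As a rough illustration (not the paper's exact definition of TC or ATC), one can measure how strongly a node's 1-hop subgraph overlaps with those of its neighbors:

```python
# Illustrative overlap score between a node's ego network and its neighbors' ego networks.
import networkx as nx

def topological_concentration(G: nx.Graph, v) -> float:
    ego_v = set(G[v]) | {v}
    neigh = list(G[v])
    if not neigh:
        return 0.0
    overlaps = []
    for u in neigh:
        ego_u = set(G[u]) | {u}
        overlaps.append(len(ego_v & ego_u) / len(ego_v | ego_u))  # Jaccard overlap
    return sum(overlaps) / len(neigh)

G = nx.karate_club_graph()
scores = {v: topological_concentration(G, v) for v in G}
```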
arXiv Detail & Related papers (2023-10-06T22:07:49Z) - Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks (CONN), a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z) - Grad-Align+: Empowering Gradual Network Alignment Using Attribute Augmentation [4.536868213405015]
Network alignment (NA) is the task of discovering node correspondences across different networks.
We propose Grad-Align+, a novel NA method using node attribute augmentation.
We show that Grad-Align+ exhibits (a) superiority over benchmark NA methods, (b) empirical validation of our theoretical findings, and (c) the effectiveness of our attribute augmentation module.
arXiv Detail & Related papers (2022-08-23T15:12:12Z) - What Do Graph Convolutional Neural Networks Learn? [0.0]
Graph Convolutional Neural Networks (GCNs) are a common variant of graph neural networks (GNNs).
Recent literature has highlighted that GCNs can achieve strong performance on heterophilous graphs under certain "special conditions".
Our investigation on underlying graph structures of a dataset finds that a GCN's SSNC performance is significantly influenced by the consistency and uniqueness in neighborhood structure of nodes within a class.
arXiv Detail & Related papers (2022-07-05T06:44:37Z) - On the Power of Gradual Network Alignment Using Dual-Perception Similarities [14.779474659172923]
Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that, since most existing NA methods attempt to discover all node pairs at once, they do not harness the information enriched through the interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
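A hedged sketch of such a gradual-discovery loop follows: commit the most confident pairs first, then reward candidate pairs whose neighbors are already aligned. The scoring, step size, and boost value are placeholders, not Grad-Align's actual dual-perception similarity.

```python
# Iterative greedy alignment that reinforces pairs consistent with already-aligned neighbors.
import numpy as np
import networkx as nx

def gradual_align(Gs, Gt, S, pairs_per_step=5, boost=0.5):
    """S: (|Vs|, |Vt|) similarity matrix from embeddings; returns a dict of matched pairs."""
    S = S.copy()
    nodes_s, nodes_t = list(Gs.nodes()), list(Gt.nodes())
    matched = {}
    while len(matched) < min(len(nodes_s), len(nodes_t)):
        for _ in range(pairs_per_step):
            i, j = np.unravel_index(np.argmax(S), S.shape)
            if S[i, j] == -np.inf:            # nothing left to match
                return matched
            matched[nodes_s[i]] = nodes_t[j]
            S[i, :] = -np.inf                  # remove the matched row ...
            S[:, j] = -np.inf                  # ... and column from future steps
            # Reward cross-network neighbor pairs of the newly aligned pair.
            for a in Gs.neighbors(nodes_s[i]):
                for b in Gt.neighbors(nodes_t[j]):
                    ia, jb = nodes_s.index(a), nodes_t.index(b)
                    if S[ia, jb] != -np.inf:
                        S[ia, jb] += boost
    return matched
```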
arXiv Detail & Related papers (2022-01-26T14:01:32Z) - Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
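A simplified sketch of that ordering-plus-convolution idea (single target node, fixed neighbor budget; the attention form, padding scheme, and class name are assumptions, not the paper's exact layer):

```python
# Score neighbors with attention, order them, then convolve over the ordered sequence.
import torch
import torch.nn as nn

class Node2SeqLayer(nn.Module):
    def __init__(self, in_dim, out_dim, max_neighbors=10):
        super().__init__()
        self.att = nn.Linear(2 * in_dim, 1)
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=max_neighbors)
        self.max_neighbors = max_neighbors

    def forward(self, x_target, x_neighbors):
        # x_target: (in_dim,), x_neighbors: (k, in_dim) with k <= max_neighbors
        k = x_neighbors.size(0)
        scores = self.att(torch.cat(
            [x_target.expand(k, -1), x_neighbors], dim=1)).squeeze(-1)
        order = torch.argsort(scores, descending=True)       # attention-based ordering
        seq = x_neighbors[order]
        # Pad to a fixed length so the 1D convolution sees a constant-size sequence.
        pad = torch.zeros(self.max_neighbors - k, x_neighbors.size(1))
        seq = torch.cat([seq, pad], dim=0).t().unsqueeze(0)   # (1, in_dim, max_neighbors)
        return self.conv(seq).squeeze()                       # (out_dim,)
```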
arXiv Detail & Related papers (2021-01-06T03:05:37Z) - Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN, which can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs.
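As a rough sketch of similarity preservation, one can propagate features over both the original adjacency and a feature-similarity kNN graph and blend the two; SimP-GCN's adaptive, per-node balancing and self-supervised objective are omitted here, and the fixed alpha and function names below are assumptions.

```python
# Blend propagation over the structural graph and over a feature-similarity kNN graph.
import numpy as np

def propagate(adj, X):
    """Row-normalized one-step feature propagation."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / np.clip(deg, 1, None)) @ X

def knn_graph(X, k=5):
    """Adjacency of a cosine-similarity kNN graph over node features."""
    norm = X / np.clip(np.linalg.norm(X, axis=1, keepdims=True), 1e-12, None)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    A = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, :k]
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)                  # symmetrize

def simp_mix(adj, X, alpha=0.5, k=5):
    """Blend structural and feature-similarity propagation (fixed alpha here)."""
    return alpha * propagate(adj, X) + (1 - alpha) * propagate(knn_graph(X, k), X)
```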
arXiv Detail & Related papers (2020-11-19T04:18:01Z) - DINE: A Framework for Deep Incomplete Network Embedding [33.97952453310253]
We propose a Deep Incomplete Network Embedding method, namely DINE.
We first complete the missing part, including both nodes and edges, of a partially observable network using the expectation-maximization (EM) framework.
We evaluate DINE over three networks on multi-label classification and link prediction tasks.
arXiv Detail & Related papers (2020-08-09T04:59:35Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between label propagation (LPA) and graph convolutional networks (GCN) in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
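For reference, a minimal label propagation routine (the LPA side of the analysis) is sketched below; the unified model's learned edge weights and GCN branch are omitted, and the clamping schedule is an assumption.

```python
# Basic label propagation: spread one-hot seed labels over a row-normalized adjacency.
import numpy as np

def label_propagation(adj, labels, mask, num_classes, iters=20):
    """adj: (N, N) adjacency; labels: (N,) ints; mask: boolean array of labeled nodes."""
    P = adj / np.clip(adj.sum(axis=1, keepdims=True), 1, None)   # row-normalize
    Y = np.zeros((len(labels), num_classes))
    Y[mask, labels[mask]] = 1.0                                   # one-hot seed labels
    F = Y.copy()
    for _ in range(iters):
        F = P @ F
        F[mask] = Y[mask]          # clamp known labels after each propagation step
    return F.argmax(axis=1)
```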
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.