SplitGNN: Splitting GNN for Node Classification with Heterogeneous
Attention
- URL: http://arxiv.org/abs/2301.12885v1
- Date: Fri, 27 Jan 2023 12:08:44 GMT
- Title: SplitGNN: Splitting GNN for Node Classification with Heterogeneous
Attention
- Authors: Xiaolong Xu and Lingjuan Lyu and Yihong Dong and Yicheng Lu and
Weiqiang Wang and Hong Jin
- Abstract summary: We propose a split learning-based graph neural network (SplitGNN) for collaborative graph computation.
Our SplitGNN allows isolated heterogeneous neighborhoods to be collaboratively utilized.
We demonstrate the effectiveness of SplitGNN on node classification tasks over two standard public datasets and a real-world dataset.
- Score: 29.307331758493323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With privacy leaks occurring frequently and privacy laws being
enacted across different countries, data owners are reluctant to share their
raw data and labels directly with any other party. In practice, much of this
raw data is stored in graph databases, especially in finance. For
collaboratively building graph neural networks (GNNs), federated learning (FL)
may not be an ideal choice in the vertically partitioned setting, where privacy
and efficiency are the main concerns. Moreover, almost all existing federated
GNNs are designed for homogeneous graphs: they collapse various types of
relations into a single type, which largely limits their performance. We bridge
this gap by proposing a split learning-based GNN (SplitGNN), in which the model
is divided into two sub-models: a local GNN model performs all computation
involving private data to generate local node embeddings, while a global model
computes global embeddings by aggregating all participants' local embeddings.
SplitGNN thus allows isolated heterogeneous neighborhoods to be collaboratively
utilized. To better capture representations, we propose a novel Heterogeneous
Attention (HAT) algorithm that uses both node-based and path-based attention
mechanisms to learn over various types of nodes and edges with multi-hop
relation features. We demonstrate the effectiveness of SplitGNN on node
classification tasks over two standard public datasets and a real-world
dataset. Extensive experimental results validate that SplitGNN significantly
outperforms state-of-the-art (SOTA) methods.
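To make the split concrete, here is a minimal NumPy sketch of the two-sub-model flow described above: each participant runs a local GNN layer over its private subgraph to produce node embeddings, and a server-side step combines the participants' embeddings with an attention-weighted sum. The function names (`local_embed`, `global_aggregate`), the single-layer mean-aggregation GNN, and the toy softmax attention are all illustrative assumptions, not the paper's actual HAT algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_embed(X, A, W):
    """Participant side: one GNN layer on a private subgraph.
    Mean-aggregates neighbor features, then applies a linear map + ReLU."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    H = (A @ X) / deg                      # neighborhood mean
    return np.maximum(H @ W, 0.0)

def global_aggregate(local_embs, a):
    """Server side: attention-weighted sum of participants' local embeddings
    (a toy stand-in for the paper's heterogeneous attention module)."""
    scores = np.array([float(np.tanh(H @ a).mean()) for H in local_embs])
    w = np.exp(scores) / np.exp(scores).sum()   # softmax over participants
    return sum(wi * H for wi, H in zip(w, local_embs))

n, d_in, d_out = 5, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                   # self-loops
# Two participants, each holding its own private features and weights.
parts = [local_embed(rng.random((n, d_in)), A,
                     rng.standard_normal((d_in, d_out)))
         for _ in range(2)]
Z = global_aggregate(parts, rng.standard_normal(d_out))
print(Z.shape)  # → (5, 3)
```

Note that in this sketch only the local embeddings cross the participant boundary; raw features and adjacency stay with their owners, which is the privacy argument the split design rests on.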
Related papers
- FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks [45.94642721490744]
Heterogeneous graph neural networks (HGNNs) can learn from typed and relational graph data more effectively than conventional GNNs.
With larger parameter spaces, HGNNs may require more training data, which is often scarce in real-world applications due to privacy regulations.
We propose FedHGN, a novel and general FGL framework for HGNNs.
arXiv Detail & Related papers (2023-05-16T18:01:49Z)
- LSGNN: Towards General Graph Neural Network in Node Classification by Local Similarity [59.41119013018377]
We propose to use the local similarity (LocalSim) to learn node-level weighted fusion, which can also serve as a plug-and-play module.
For better fusion, we propose a novel and efficient Initial Residual Difference Connection (IRDC) to extract more informative multi-hop information.
Our proposed method, namely Local Similarity Graph Neural Network (LSGNN), offers performance comparable or superior to the state of the art on both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-05-07T09:06:11Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z) - Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with
Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z) - GAP: Differentially Private Graph Neural Networks with Aggregation
Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z) - A Variational Edge Partition Model for Supervised Graph Representation
Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN based predictor that combines community-specific GNNs for the end classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z) - ASFGNN: Automated Separated-Federated Graph Neural Network [17.817867271722093]
We propose an automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm.
We conduct experiments on benchmark datasets and the results demonstrate that ASFGNN significantly outperforms the naive federated GNN.
arXiv Detail & Related papers (2020-11-06T09:21:34Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Towards Deeper Graph Neural Networks with Differentiable Group
Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues that limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Vertically Federated Graph Neural Network for Privacy-Preserving Node
Classification [39.53937689989282]
VFGNN is a learning paradigm for privacy-preserving node classification under the vertically partitioned data setting.
We keep the computations involving private data on the data holders, and delegate the rest of the computation to a semi-honest server.
We conduct experiments on three benchmarks and the results demonstrate the effectiveness of VFGNN.
arXiv Detail & Related papers (2020-05-25T03:12:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.