Cooperative Network Learning for Large-Scale and Decentralized Graphs
- URL: http://arxiv.org/abs/2311.02117v2
- Date: Tue, 7 Nov 2023 08:50:24 GMT
- Title: Cooperative Network Learning for Large-Scale and Decentralized Graphs
- Authors: Qiang Wu, Yiming Huang, Yujie Zeng, Yijie Teng, Fang Zhou, Linyuan Lü
- Abstract summary: We introduce a Cooperative Network Learning (CNL) framework to ensure secure graph computing for various graph tasks.
CNL unifies the local and global perspectives of GNN computing with distributed data for an agency.
We hope this framework will address privacy concerns in graph-related research and integrate decentralized graph data structures.
- Score: 7.628975821850447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph research, the systematic study of interconnected data points
represented as graphs, plays a vital role in capturing intricate relationships
within networked systems. However, in the real world, as graphs scale up,
concerns about data security among different data-owning agencies arise,
hindering information sharing and, ultimately, the utilization of graph data.
Therefore, establishing a mutual trust mechanism among graph agencies is
crucial for unlocking the full potential of graphs. Here, we introduce a
Cooperative Network Learning (CNL) framework to ensure secure graph computing
for various graph tasks. Essentially, this CNL framework unifies the local and
global perspectives of GNN computing with distributed data for an agency by
virtually connecting all participating agencies as a global graph without a
fixed central coordinator. Inter-agency computing is protected by various
technologies inherent in our framework, including homomorphic encryption and
secure transmission. Moreover, each agency has a fair right to design or employ
various graph learning models from its local or global perspective. Thus, CNL
can collaboratively train GNN models based on decentralized graphs inferred
from local and global graphs. Experiments on contagion dynamics prediction and
traditional graph tasks (i.e., node classification and link prediction)
demonstrate that our CNL architecture outperforms state-of-the-art GNNs
developed at individual sites, revealing that CNL can provide a reliable, fair,
secure, privacy-preserving, and global perspective to build effective and
personalized models for network applications. We hope this framework will
address privacy concerns in graph-related research and integrate decentralized
graph data structures to benefit the network research community in cooperation
and innovation.
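The abstract above describes the core CNL idea: each agency keeps its subgraph private, agencies are virtually connected into a global graph with no fixed central coordinator, and cross-agency messages are protected by homomorphic encryption and secure transmission. The snippet below is a minimal sketch of that pattern under stated assumptions, not the paper's implementation: the Agency class, the toy two-agency graph, and the encrypt/decrypt placeholders (which here are identity functions standing in for real homomorphic encryption) are all illustrative names invented for this example.

```python
# Minimal sketch (assumed, not the paper's code) of one cooperative
# message-passing round between two data-owning agencies.
import numpy as np

rng = np.random.default_rng(0)

def encrypt(x):
    # Placeholder: a real CNL-style deployment would use homomorphic
    # encryption / secure transmission here, not an identity function.
    return x.copy()

def decrypt(x):
    return x.copy()

class Agency:
    """Holds a private subgraph: node features plus the edges it owns."""
    def __init__(self, features, edges):
        self.h = features          # (num_nodes, dim) local node features
        self.edges = edges         # list of (src, dst) pairs inside this agency

    def local_aggregate(self):
        """Local perspective: mean-style aggregation over the agency's own edges."""
        agg = np.zeros_like(self.h)
        deg = np.zeros(len(self.h))
        for s, d in self.edges:
            agg[d] += self.h[s]
            deg[d] += 1.0
        return agg, deg

# Two agencies with 3 private nodes each and one cross-agency edge A:2 -> B:0.
A = Agency(rng.normal(size=(3, 4)), [(0, 1), (1, 2)])
B = Agency(rng.normal(size=(3, 4)), [(0, 2), (1, 2)])

agg_B, deg_B = B.local_aggregate()

# Global perspective: A transmits a protected boundary embedding peer-to-peer
# (no fixed central coordinator); B folds it into its aggregation.
cipher = encrypt(A.h[2])
agg_B[0] += decrypt(cipher)
deg_B[0] += 1.0

h_B_next = agg_B / np.maximum(deg_B, 1.0)[:, None]   # one GNN-style update at B
print(h_B_next.shape)   # (3, 4)
```

In this sketch, agency B never receives A's raw subgraph, only the protected embedding needed for the boundary edge; stacking such rounds gives each agency the global receptive field the abstract refers to.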
Related papers
- Federated Temporal Graph Clustering [9.779760673367663]
Temporal graph clustering is a complex task that involves discovering meaningful structures in dynamic graphs where relationships and entities change over time.
Existing methods typically require centralized data collection, which poses significant privacy and communication challenges.
We introduce a novel Federated Temporal Graph Clustering framework that enables decentralized training of graph neural networks (GNNs) across multiple clients.
arXiv Detail & Related papers (2024-10-16T08:04:57Z) - Privacy-preserving design of graph neural networks with applications to
vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z) - FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural
Networks [40.5420021584431]
Federated training of Graph Neural Networks (GNN) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios.
graph heterogeneity issues in federated GNN systems continue to pose challenges.
We propose FedGKD, a novel federated GNN framework that utilizes a novel client-side graph dataset distillation method.
arXiv Detail & Related papers (2023-09-18T06:55:14Z) - Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized
Devices [19.27111697495379]
Graph neural networks (GNNs) have been widely deployed in real-world networked applications and systems.
We propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning.
Based on the constructed tree for each client, a decentralized tree-based GNN trainer is proposed to support versatile training.
arXiv Detail & Related papers (2023-03-01T13:27:06Z) - FedEgo: Privacy-preserving Personalized Federated Graph Learning with
Ego-graphs [22.649780281947837]
In some practical scenarios, graph data are stored separately in multiple distributed parties, which may not be directly shared due to conflicts of interest.
We propose FedEgo, a federated graph learning framework based on ego-graphs to tackle the challenges above.
arXiv Detail & Related papers (2022-08-29T15:47:36Z) - Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model by contrasting cluster assignments, called as GRCCA.
It is motivated to make good use of local and global information synthetically through combining clustering algorithms and contrastive learning.
GRCCA has strong competitiveness in most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z) - FedGL: Federated Graph Learning Framework with Global Self-Supervision [22.124339267195822]
FedGL is capable of obtaining a high-quality global graph model while protecting data privacy.
The global self-supervision enables the information of each client to flow and share in a privacy-preserving manner.
arXiv Detail & Related papers (2021-05-07T11:27:23Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning
for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing attention mechanism to characterize the importance among spatially neighboring regions, the most relevant information can be adaptively incorporated to make decisions.
Our MGCN-AGL encodes the long range dependencies among image regions based on the expressive representations that have been produced at local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.