Distributed Graph Learning with Smooth Data Priors
- URL: http://arxiv.org/abs/2112.05887v1
- Date: Sat, 11 Dec 2021 00:52:02 GMT
- Title: Distributed Graph Learning with Smooth Data Priors
- Authors: Isabela Cunha Maia Nobre, Mireille El Gheche, Pascal Frossard
- Abstract summary: We propose a novel distributed graph learning algorithm, which infers a graph from signal observations on the nodes.
Our results show that the distributed approach has a lower communication cost than a centralised algorithm, without compromising the accuracy of the inferred graph.
- Score: 61.405131495287755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph learning is often a necessary step in processing or representing
structured data, when the underlying graph is not given explicitly. Graph
learning is generally performed centrally, with full knowledge of the graph
signals, namely the data that lives on the graph nodes. However, there are
settings where data cannot be collected easily, or can be collected only at a
non-negligible communication cost. In such cases, distributed processing is a
natural solution, where the data stays mostly local and all processing is
performed among neighbouring nodes on the communication graph. We propose here
a novel distributed graph learning algorithm, which makes it possible to infer
a graph from signal observations on the nodes, under the assumption that the
data is smooth on the target graph. We solve a distributed optimization problem
with local projection constraints to infer a valid graph while limiting the
communication costs. Our results show that the distributed approach has a lower
communication cost than a centralised algorithm, without compromising the
accuracy of the inferred graph. Its communication cost also scales better as
the network size grows, especially for sparse networks.
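To make the smoothness prior concrete: signals X are smooth on a graph with weights W when tr(X^T L X) = (1/2) Σ_ij W_ij ||x_i − x_j||² is small, where L is the graph Laplacian. The following is a minimal, centralised sketch of learning W from smooth signals in that spirit; it is not the paper's distributed algorithm (which adds local projection constraints to limit communication), and all names and hyper-parameter values are illustrative placeholders.

```python
import numpy as np

def learn_graph_from_smooth_signals(X, alpha=1.0, beta=0.5,
                                    step=1e-2, n_iter=500):
    """Toy centralised sketch of smoothness-based graph learning.

    Minimises  sum_ij W_ij * ||x_i - x_j||^2        (smoothness term)
             - alpha * sum_i log(sum_j W_ij)        (keeps nodes connected)
             + beta  * ||W||_F^2                    (controls scale/sparsity)
    over symmetric, non-negative W with zero diagonal, by projected
    gradient descent. Hyper-parameters are illustrative, not tuned.
    """
    n = X.shape[0]
    # Z_ij = squared Euclidean distance between the signals at nodes i and j
    sq_norms = (X ** 2).sum(axis=1)
    Z = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    W = np.ones((n, n)) - np.eye(n)           # dense initial guess
    for _ in range(n_iter):
        d = W.sum(axis=1) + 1e-10             # node degrees
        # gradient of the objective with respect to each W_ij
        grad = Z - alpha * (1.0 / d[:, None] + 1.0 / d[None, :]) + 2.0 * beta * W
        W -= step * grad
        W = np.maximum((W + W.T) / 2.0, 0.0)  # project: symmetric, W_ij >= 0
        np.fill_diagonal(W, 0.0)              # no self-loops
    return W

# Usage: signals that are smooth on a 2-cluster graph
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 8)), rng.normal(3, 0.1, (5, 8))])
W = learn_graph_from_smooth_signals(X)
print(np.round(W, 2))  # within-cluster weights dominate
```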
Related papers
- Online Network Inference from Graph-Stationary Signals with Hidden Nodes [31.927912288598467]
We present a novel method for online graph estimation that accounts for the presence of hidden nodes.
We then formulate a convex optimization problem for graph learning from streaming, incomplete graph signals.
arXiv Detail & Related papers (2024-09-13T12:09:09Z)
- Task-Oriented Communication for Graph Data: A Graph Information Bottleneck Approach [12.451324619122405]
This paper introduces a method to extract a smaller, task-focused subgraph that maintains key information while reducing communication overhead.
Our approach utilizes graph neural networks (GNNs) and the graph information bottleneck (GIB) principle to create a compact, informative, and robust graph representation suitable for transmission.
arXiv Detail & Related papers (2024-09-04T14:01:56Z)
- Federated Learning over Coupled Graphs [39.86903030911785]
Federated Learning (FL) has been proposed to solve the data isolation issue, mainly for Euclidean data.
We propose a novel FL framework for graph data, FedCog, to efficiently handle coupled graphs, a type of distributed graph data.
arXiv Detail & Related papers (2023-01-26T13:43:26Z)
- Graph Learning Across Data Silos [12.343382413705394]
We consider the problem of inferring graph topology from smooth graph signals in a novel but practical scenario.
Data are located at distributed clients and are prohibited from leaving them, due to factors such as privacy concerns.
We propose an auto-weighted multiple graph learning model to jointly learn a personalized graph for each local client and a single consensus graph for all clients.
arXiv Detail & Related papers (2023-01-17T02:14:57Z)
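The summary above names the model but not its objective. One common way to write an auto-weighted personalized-plus-consensus formulation (an assumption for illustration, not necessarily this paper's exact objective) is:

```latex
\min_{\{W_k\},\,\bar{W}} \;\sum_{k=1}^{K}
  \Big[\, \operatorname{tr}\!\big( X_k^{\top} L(W_k)\, X_k \big)
        \;+\; \lambda_k \,\lVert W_k - \bar{W} \rVert_F^{2} \,\Big]
```

Here X_k are the signals held by client k, L(W_k) is the Laplacian of its personalized graph W_k, \bar{W} is the shared consensus graph, and the weights \lambda_k are adjusted automatically during optimization rather than hand-tuned.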
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
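As a rough sketch of the unrolling idea in this entry (the objective, parameterization, and constants below are assumptions, not the actual GDN design): each "layer" applies one truncated proximal gradient step, and in a trained network the per-layer step sizes and thresholds would be learned from example pairs of observed and latent graphs.

```python
import numpy as np

def soft_threshold(M, tau):
    """Prox of tau * ||M||_1: entrywise shrinkage toward zero."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def unrolled_deconvolution(T, gammas, taus):
    """Forward pass of an unrolled, truncated proximal-gradient net.

    Each 'layer' k performs one proximal gradient step on
        (1/2) * ||T - A||_F^2  +  tau_k * ||A||_1,
    i.e. a gradient step followed by soft-thresholding. In a trained
    network, the per-layer step sizes (gammas) and thresholds (taus)
    would be learned; here they are fixed so the sketch stays
    self-contained.
    """
    A = np.zeros_like(T)                # latent adjacency estimate
    for gamma, tau in zip(gammas, taus):
        A = A - gamma * (A - T)         # gradient step on the fit term
        A = soft_threshold(A, tau)      # prox step: promotes sparsity
        np.fill_diagonal(A, 0.0)        # keep it a valid adjacency
    return A

# Usage: recover a sparse graph from a noisy observation of it
rng = np.random.default_rng(1)
A_true = (rng.random((6, 6)) < 0.3).astype(float)
A_true = np.triu(A_true, 1); A_true += A_true.T
T = A_true + 0.2 * rng.normal(size=A_true.shape)
A_hat = unrolled_deconvolution(T, gammas=[0.8] * 5, taus=[0.15] * 5)
```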
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from the same distribution.
Existing GNNs lack out-of-distribution generalization abilities, so their performance degrades substantially when there are distribution shifts between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z)
- Distributed Training of Graph Convolutional Networks using Subgraph Approximation [72.89940126490715]
We propose a training strategy that mitigates the information lost across multiple partitions of a graph through a subgraph approximation scheme.
The subgraph approximation approach helps the distributed training system converge to single-machine accuracy.
arXiv Detail & Related papers (2020-12-09T09:23:49Z)
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable, due to their computation and memory costs.
Subg-Con is proposed, utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
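A minimal, generic illustration of contrasting a central node with its sampled subgraph (the encoder, sampler, and loss here are stand-ins, not Subg-Con's actual architecture):

```python
import numpy as np

def ego_subgraph_embedding(W, X, center, hops=1):
    """Stand-in 'encoder': mean feature over the centre's h-hop
    neighbourhood. A real model would run a GNN on the sampled subgraph."""
    adj = (W > 0).astype(float)
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[center] = True
    for _ in range(hops):
        mask = mask | (adj @ mask > 0)    # grow the neighbourhood by one hop
    return X[mask].mean(axis=0)

def subgraph_contrastive_loss(W, X, temperature=0.5):
    """InfoNCE-style objective: each node's embedding should be most
    similar to the embedding of its own subgraph (positive pair),
    relative to every other node's subgraph (negatives)."""
    n = X.shape[0]
    subs = np.stack([ego_subgraph_embedding(W, X, i) for i in range(n)])
    logits = (X @ subs.T) / temperature           # node-vs-subgraph similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal
```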
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.