Learning Graph Structure from Convolutional Mixtures
- URL: http://arxiv.org/abs/2205.09575v1
- Date: Thu, 19 May 2022 14:08:15 GMT
- Title: Learning Graph Structure from Convolutional Mixtures
- Authors: Max Wasserman, Saurabh Sihag, Gonzalo Mateos, Alejandro Ribeiro
- Abstract summary: We postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
- Score: 119.45320143101381
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning frameworks such as graph neural networks typically rely on a
given, fixed graph to exploit relational inductive biases and thus effectively
learn from network data. However, when said graphs are (partially) unobserved,
noisy, or dynamic, the problem of inferring graph structure from data becomes
relevant. In this paper, we postulate a graph convolutional relationship
between the observed and latent graphs, and formulate the graph learning task
as a network inverse (deconvolution) problem. In lieu of
eigendecomposition-based spectral methods or iterative optimization solutions,
we unroll and truncate proximal gradient iterations to arrive at a
parameterized neural network architecture that we call a Graph Deconvolution
Network (GDN). GDNs can learn a distribution of graphs in a supervised fashion,
perform link prediction or edge-weight regression tasks by adapting the loss
function, and they are inherently inductive. We corroborate GDN's superior
graph recovery performance and its generalization to larger graphs using
synthetic data in supervised settings. Furthermore, we demonstrate the
robustness and representation power of GDNs on real world neuroimaging and
social network datasets.
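The unrolling described in the abstract admits a compact sketch. The following PyTorch snippet is a minimal, assumption-laden illustration of unrolled proximal gradient layers for graph deconvolution; the layer count, the learnable step sizes and thresholds, and the simplified quadratic fit term are illustrative choices, not the authors' exact GDN architecture.

```python
import torch
import torch.nn as nn

class GDNSketch(nn.Module):
    """A minimal sketch of unrolled proximal gradient iterations for
    graph deconvolution: each layer takes a gradient step on a
    simplified fit term, then a proximal step (soft-threshold + ReLU)
    that promotes sparse, nonnegative edge weights."""
    def __init__(self, num_layers: int = 8):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((num_layers,), 0.5))  # step sizes
        self.tau = nn.Parameter(torch.full((num_layers,), 0.1))    # thresholds
        self.num_layers = num_layers

    def forward(self, A_obs: torch.Tensor) -> torch.Tensor:
        A = torch.zeros_like(A_obs)  # initial latent-graph estimate
        for k in range(self.num_layers):
            # Gradient step; ||A - A_obs||^2 stands in for the true fit
            # term, which couples A_obs to a polynomial of A.
            A = A - self.alpha[k] * (A - A_obs)
            # Proximal step: sparsify and keep edge weights nonnegative.
            A = torch.relu(A - self.tau[k])
            A = 0.5 * (A + A.transpose(-1, -2))  # keep the estimate symmetric
        return A
```

Trained end to end against ground-truth latent graphs (e.g., with a binary cross-entropy loss for link prediction or a squared loss for edge-weight regression, as the abstract notes), such unrolled layers remain inductive across graph sizes.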
Related papers
- Unitary convolutions for learning on graphs and groups [0.9899763598214121]
We study unitary group convolutions, which allow for deeper networks that are more stable during training.
The main focus of the paper is graph neural networks, where we show that unitary graph convolutions provably avoid over-smoothing (a toy norm-preservation sketch follows this entry).
Our experimental results confirm that unitary graph convolutional networks achieve competitive performance on benchmark datasets.
arXiv Detail & Related papers (2024-10-07T21:09:14Z)
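To see why unitarity counters over-smoothing, consider the toy sketch below. It is an illustration under our own assumptions, not the paper's construction: for a symmetric adjacency matrix A, exp(i·t·A) is unitary, so applying it any number of times preserves the norm of the node features rather than shrinking them toward a constant signal.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-node path graph; A is symmetric, so 1j * A is skew-Hermitian
# and its matrix exponential is unitary.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
U = expm(1j * 0.5 * A)                        # unitary propagator
assert np.allclose(U @ U.conj().T, np.eye(3))

x = np.array([1.0, -2.0, 0.5])                # node features
y = U @ x                                     # one "unitary convolution"
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))  # norm preserved
```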
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal loss sketch follows this entry).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
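A hedged sketch of such an anchor-graph contrastive objective, in the NT-Xent style: embeddings of the same node under the anchor graph and the learned graph form a positive pair, with all other nodes as negatives. The function name, temperature, and pairing scheme are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def anchor_contrastive_loss(z_anchor: torch.Tensor,
                            z_learned: torch.Tensor,
                            temperature: float = 0.5) -> torch.Tensor:
    """Node-level contrastive loss between two graph views.

    z_anchor, z_learned: (N, d) node embeddings computed on the anchor
    graph and the learned graph; row i of each refers to the same node.
    """
    za = F.normalize(z_anchor, dim=1)
    zl = F.normalize(z_learned, dim=1)
    logits = za @ zl.t() / temperature               # (N, N) cosine similarities
    labels = torch.arange(za.size(0), device=za.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```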
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training (a sampling sketch follows this entry).
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
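A minimal sketch of the Bernoulli sampling step behind this growing-graph scheme; the graphon W and the growth schedule below are hypothetical choices for illustration only.

```python
import numpy as np

def sample_from_graphon(graphon, n, rng):
    """Draw latent positions u_i ~ U[0, 1], then sample each edge
    A_ij ~ Bernoulli(graphon(u_i, u_j)) independently."""
    u = rng.uniform(size=n)
    P = graphon(u[:, None], u[None, :])                 # edge probabilities
    A = np.triu(rng.uniform(size=(n, n)) < P, k=1).astype(float)
    return A + A.T                                      # symmetric, no self-loops

rng = np.random.default_rng(0)
W = lambda x, y: 0.8 * np.exp(-4.0 * np.abs(x - y))     # illustrative graphon
for n in (50, 100, 200, 400):                           # successively grow the graph
    A = sample_from_graphon(W, n, rng)
    # ... train the GNN for a few epochs on A before growing further ...
```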
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode the data structure in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Graph Signal Processing -- Part III: Machine Learning on Graphs, from Graph Topology to Applications [19.29066508374268]
Part III of this monograph starts by addressing ways to learn graph topology.
A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data.
For learning sparse graphs, the least absolute shrinkage and selection operator (LASSO) is employed; a minimal precision-matrix sketch follows this entry.
arXiv Detail & Related papers (2020-01-02T13:14:27Z)
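As a concrete, hedged illustration of precision-matrix-based topology learning, the graphical lasso in scikit-learn fits a sparse inverse covariance whose off-diagonal support can be read as an edge set; the placeholder data, regularization strength, and threshold below are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# X: (n_samples, n_nodes) nodal observations, one row per sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))          # placeholder data for the sketch

model = GraphicalLasso(alpha=0.1).fit(X)    # L1-penalized precision estimate
Theta = model.precision_                    # sparse inverse covariance

# Nonzero off-diagonal entries of Theta are the inferred edges.
A = (np.abs(Theta) > 1e-6).astype(int)
np.fill_diagonal(A, 0)                      # drop self-loops
```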
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.