GraphMDN: Leveraging graph structure and deep learning to solve inverse
problems
- URL: http://arxiv.org/abs/2010.13668v1
- Date: Mon, 26 Oct 2020 15:44:22 GMT
- Title: GraphMDN: Leveraging graph structure and deep learning to solve inverse
problems
- Authors: Tuomas P. Oikarinen (1), Daniel C. Hannah (2), Sohrob Kazerounian (2)
((1) Massachusetts Institute of Technology, (2) Vectra AI)
- Abstract summary: We develop a Graph Mixture Density Network (GraphMDN), which combines graph neural networks with mixture density network (MDN) outputs.
GraphMDNs excel on regression tasks wherein the data are graph structured, and target statistics are better represented by mixtures of densities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent introduction of Graph Neural Networks (GNNs) and their growing
popularity over the past few years have enabled the application of deep learning
algorithms to non-Euclidean, graph-structured data. GNNs have achieved
state-of-the-art results across an impressive array of graph-based machine
learning problems. Nevertheless, despite their rapid pace of development, much
of the work on GNNs has focused on graph classification and embedding
techniques, largely ignoring regression tasks over graph data. In this paper,
we develop a Graph Mixture Density Network (GraphMDN), which combines graph
neural networks with mixture density network (MDN) outputs. By combining these
techniques, GraphMDNs have the advantage of naturally being able to incorporate
graph structured information into a neural architecture, as well as the ability
to model multi-modal regression targets. As such, GraphMDNs are designed to
excel on regression tasks wherein the data are graph structured, and target
statistics are better represented by mixtures of densities than by single
values (so-called "inverse problems"). To demonstrate this, we extend an
existing GNN architecture known as Semantic GCN (SemGCN) to a GraphMDN
structure, and show results from the Human3.6M pose estimation task. The
extended model consistently outperforms both GCN and MDN architectures on their
own, with a comparable number of parameters.
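The core construction is simple to sketch: any GNN encoder produces per-node embeddings, and an MDN head maps each embedding to K mixture weights, means, and scales, trained by minimizing the mixture negative log-likelihood -log sum_k pi_k N(y; mu_k, sigma_k^2 I). Below is a minimal PyTorch sketch of this idea, not the authors' implementation; `GraphMDNHead`, `mdn_nll`, and the isotropic-Gaussian parameterization are illustrative assumptions.
```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphMDNHead(nn.Module):
    """Mixture density head over per-node GNN embeddings: for each node it
    predicts K mixture weights, K means, and K isotropic scales for a
    d-dimensional regression target (e.g., a 3D joint position)."""
    def __init__(self, embed_dim, target_dim, n_mixtures):
        super().__init__()
        self.k, self.d = n_mixtures, target_dim
        self.pi = nn.Linear(embed_dim, n_mixtures)               # mixture logits
        self.mu = nn.Linear(embed_dim, n_mixtures * target_dim)  # component means
        self.log_sigma = nn.Linear(embed_dim, n_mixtures)        # log scales

    def forward(self, h):                        # h: (N, embed_dim) node embeddings
        log_pi = F.log_softmax(self.pi(h), dim=-1)               # (N, K)
        mu = self.mu(h).view(-1, self.k, self.d)                 # (N, K, d)
        sigma = self.log_sigma(h).exp().clamp(min=1e-4)          # (N, K)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    """Negative log-likelihood of targets y (N, d) under the mixture:
    -log sum_k pi_k * N(y; mu_k, sigma_k^2 I)."""
    d = y.shape[-1]
    sq_dist = ((y.unsqueeze(1) - mu) ** 2).sum(-1)               # (N, K)
    log_comp = (-0.5 * sq_dist / sigma ** 2
                - d * sigma.log()
                - 0.5 * d * math.log(2 * math.pi))
    return -torch.logsumexp(log_pi + log_comp, dim=-1).mean()

# Usage: embed nodes with any GNN encoder (e.g., a SemGCN-style network),
# then train end to end by minimizing mdn_nll on the 3D targets:
#   head = GraphMDNHead(embed_dim=128, target_dim=3, n_mixtures=5)
#   loss = mdn_nll(*head(node_embeddings), targets)
```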
Related papers
- Ensemble Learning for Graph Neural Networks [28.3650473174488]
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data.
This paper investigates the application of ensemble learning techniques to improve the performance and robustness of GNNs.
arXiv Detail & Related papers (2023-10-22T03:55:13Z)
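The simplest instance of GNN ensembling is averaging the predictions of several independently trained models. A hedged sketch follows; probability averaging, `ensemble_predict`, and the `model(x, adj)` signature are assumptions, not the paper's specific method:
```python
import torch

@torch.no_grad()
def ensemble_predict(models, x, adj):
    """Average class probabilities from several independently trained GNNs.
    Probability averaging is one generic combination rule; the paper may
    also study others (e.g., weighted voting)."""
    probs = [model(x, adj).softmax(dim=-1) for model in models]
    return torch.stack(probs).mean(dim=0)
```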
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
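The unrolling idea behind GDNs can be sketched generically: each layer takes a gradient step on a data-fidelity term and applies a simple proximal operator (here ReLU, enforcing nonnegative edge weights). This toy `UnrolledProxGrad`, with a learnable first-order graph filter, is an assumption-laden illustration rather than the published GDN:
```python
import torch
import torch.nn as nn

class UnrolledProxGrad(nn.Module):
    """Unrolled, truncated proximal gradient iterations as network layers.
    Each layer: gradient step on 0.5 * ||g(S) - A_obs||^2, where g is a
    learnable 1st-order graph filter (an assumption), then a ReLU prox."""
    def __init__(self, n_layers):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))  # per-layer step sizes
        self.mix = nn.Parameter(torch.tensor([0.0, 1.0]))        # filter coefficients

    def forward(self, a_obs):
        s = a_obs.clone()                                 # init latent graph estimate
        eye = torch.eye(a_obs.shape[-1], device=a_obs.device)
        for alpha in self.steps:
            g = self.mix[0] * eye + self.mix[1] * s       # simple 1st-order graph filter
            grad = self.mix[1] * (g - a_obs)              # gradient of the fidelity term
            s = torch.relu(s - alpha * grad)              # gradient step + ReLU prox
        return 0.5 * (s + s.transpose(-1, -2))            # symmetrize the estimate
```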
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
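A generic version of the anchor-graph objective treats each node's embedding under the anchor graph and under the learned graph as a positive pair, with all other nodes as negatives. The NT-Xent-style loss below is a sketch under that assumption, not the paper's exact loss:
```python
import torch
import torch.nn.functional as F

def node_contrastive_loss(z_anchor, z_learned, temperature=0.5):
    """Maximize agreement between each node's two views: its embedding under
    the anchor graph (z_anchor) and under the learned graph (z_learned),
    both of shape (N, dim). Other nodes act as negatives."""
    z1 = F.normalize(z_anchor, dim=-1)
    z2 = F.normalize(z_learned, dim=-1)
    logits = z1 @ z2.t() / temperature                    # (N, N) similarities
    labels = torch.arange(z1.shape[0], device=z1.device)  # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```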
- Learning to Evolve on Dynamic Graphs [5.1521870302904125]
Learning to Evolve on Dynamic Graphs (LEDG) is a novel algorithm that jointly learns graph information and time information.
LEDG is model-agnostic and can train any message-passing graph neural network (GNN) on dynamic graphs.
arXiv Detail & Related papers (2021-11-13T04:09:30Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
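That training scheme can be sketched as a size curriculum: sample Bernoulli graphs from a fixed graphon at a moderate size, train, then resample at successively larger sizes. In the sketch below, `sample_graphon_graph`, `make_features`, and the `model(x, a)` signature are illustrative assumptions:
```python
import torch

def sample_graphon_graph(w, n):
    """Sample an n-node graph from graphon w: [0,1]^2 -> [0,1]. Nodes get
    uniform latent positions; edges are independent Bernoulli draws."""
    u = torch.rand(n)
    p = w(u.unsqueeze(1), u.unsqueeze(0))        # (n, n) edge probabilities
    a = torch.bernoulli(torch.triu(p, diagonal=1))
    return a + a.t()                             # symmetric, no self-loops

def train_on_growing_graphs(model, loss_fn, make_features, w,
                            sizes=(64, 128, 256, 512), epochs_per_size=10):
    """Size curriculum in the spirit of the paper: start from a moderate
    number of nodes and successively grow the sampled graphs during training.
    make_features is an assumed helper producing node features and targets."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for n in sizes:
        for _ in range(epochs_per_size):
            a = sample_graphon_graph(w, n)
            x, y = make_features(a)
            opt.zero_grad()
            loss = loss_fn(model(x, a), y)
            loss.backward()
            opt.step()

# Example graphon (values in [0,1]): w = lambda u, v: 0.8 * torch.exp(-3 * (u - v).abs())
```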
- GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present a new distributed graph learning system, GraphTheta.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
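The "feature smoothing" view can be made concrete with an SGC-style precomputation: propagate node features over the adjacency for a few hops (once per relation type in a heterogeneous graph) and feed the concatenation to a plain MLP. A minimal sketch, assuming a dense row-normalized adjacency; it illustrates the general recipe, not the paper's specific model:
```python
import torch

def smooth_features(adj, x, hops=2):
    """SGC-style smoothing: repeatedly average neighbor features and keep
    every hop. adj: (N, N) adjacency; x: (N, F) node features. For a
    heterogeneous graph, call once per relation type and concatenate."""
    deg = adj.sum(-1, keepdim=True).clamp(min=1)
    p = adj / deg                                 # row-normalized propagation
    out = [x]
    for _ in range(hops):
        out.append(p @ out[-1])
    return torch.cat(out, dim=-1)                 # features at 0..hops hops
```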
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Incomplete Graph Representation and Learning via Partial Graph Neural Networks [7.227805463462352]
In many applications, graphs may arrive in an incomplete form in which the attributes of some nodes are partially unknown or missing.
Existing GNNs are generally designed for complete graphs and cannot handle attribute-incomplete graph data directly.
We develop novel partial-aggregation-based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning.
arXiv Detail & Related papers (2020-03-23T08:29:59Z)
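One natural reading of partial aggregation is to average, per node and per attribute, only over the neighbors that actually observe that attribute. The sketch below implements that reading with a binary observation mask; it is an illustration, not the published PaGNN operator:
```python
import torch

def partial_mean_aggregate(adj, x, observed_mask):
    """Mean-aggregate neighbor features using only observed attributes.
    adj: (N, N) adjacency; x: (N, F) features with arbitrary values at
    missing entries; observed_mask: (N, F) binary, 1 where known."""
    masked_x = x * observed_mask
    num = adj @ masked_x                          # sum of observed neighbor values
    den = (adj @ observed_mask).clamp(min=1)      # count of observing neighbors
    return num / den                              # per-attribute partial mean
```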
- Gated Graph Recurrent Neural Networks [176.3960927323358]
We introduce Graph Recurrent Neural Networks (GRNNs) as a general learning framework for graph processes.
To address the problem of vanishing gradients, we put forward GRNNs with three different gating mechanisms: time, node and edge gates.
The numerical results also show that GRNNs outperform GNNs and RNNs, highlighting the importance of taking both the temporal and graph structures of a graph process into account.
arXiv Detail & Related papers (2020-02-03T22:35:14Z)
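A gated graph recurrent step can be sketched as a recurrent update built from graph filters, with a sigmoid gate modulating how much of the previous state is kept. The cell below implements only a time gate over 1-hop graph filters; the node and edge gates and the exact filter orders are left out (`GatedGRNNCell` and its parameterization are assumptions):
```python
import torch
import torch.nn as nn

class GatedGRNNCell(nn.Module):
    """One step of a gated graph recurrent cell (illustrative, time gate only).
    Graph filters are simple 1-hop polynomials: F(S, v) = v W0 + S v W1."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.wx0 = nn.Linear(in_dim, hid_dim)
        self.wx1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.wh0 = nn.Linear(hid_dim, hid_dim)
        self.wh1 = nn.Linear(hid_dim, hid_dim, bias=False)
        self.gate_x = nn.Linear(in_dim, hid_dim)
        self.gate_h = nn.Linear(hid_dim, hid_dim)

    def forward(self, s, x, h):
        # s: (N, N) graph shift operator; x: (N, in_dim) inputs at time t;
        # h: (N, hid_dim) previous hidden state.
        t = torch.sigmoid(self.gate_x(x) + self.gate_h(h))        # time gate
        fx = self.wx0(x) + s @ self.wx1(x)                        # filter on input
        fh = self.wh0(h) + s @ self.wh1(h)                        # filter on state
        return torch.tanh(fx + t * fh)                            # gated update
```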
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.