Differentiable Graph Module (DGM) for Graph Convolutional Networks
- URL: http://arxiv.org/abs/2002.04999v4
- Date: Fri, 13 May 2022 10:41:57 GMT
- Title: Differentiable Graph Module (DGM) for Graph Convolutional Networks
- Authors: Anees Kazi, Luca Cosmo, Seyed-Ahmad Ahmadi, Nassir Navab and Michael
Bronstein
- Abstract summary: Differentiable Graph Module (DGM) is a learnable function that predicts edge probabilities in the graph that are optimal for the downstream task.
We provide an extensive evaluation of applications from the domains of healthcare (disease prediction), brain imaging (age prediction), computer graphics (3D point cloud segmentation), and computer vision (zero-shot learning).
- Score: 44.26665239213658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph deep learning has recently emerged as a powerful ML concept that
generalizes successful deep neural architectures to non-Euclidean structured
data. Such methods have shown promising results on a broad spectrum of
applications ranging from social science, biomedicine, and particle physics to
computer vision, graphics, and chemistry. One of the limitations of the
majority of current graph neural network architectures is that they are often
restricted to the transductive setting and rely on the assumption that the
underlying graph is {\em known} and {\em fixed}. Often, this assumption is not
true since the graph may be noisy, or partially and even completely unknown. In
such cases, it would be helpful to infer the graph directly from the data,
especially in inductive settings where some nodes were not present in the graph
at training time. Furthermore, learning a graph may become an end in itself, as
the inferred structure may provide complementary insights next to the
downstream task. In this paper, we introduce the Differentiable Graph Module
(DGM), a learnable function that predicts edge probabilities in the graph that
are optimal for the downstream task. DGM can be combined with convolutional graph
neural network layers and trained in an end-to-end fashion. We provide an
extensive evaluation of applications from the domains of healthcare (disease
prediction), brain imaging (age prediction), computer graphics (3D point cloud
segmentation), and computer vision (zero-shot learning). We show that our model
provides a significant improvement over baselines both in transductive and
inductive settings and achieves state-of-the-art results.
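The core idea, predicting edge probabilities from node features and sampling a sparse graph from them, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: the function names are hypothetical, the edge probability uses a common distance-based parameterization, and the top-k selection stands in for the differentiable sampling used during training.

```python
import numpy as np

def dgm_edge_probabilities(features, temperature=1.0):
    """Sketch of learnable edge probabilities: p_ij = exp(-t * ||f_i - f_j||^2).
    In a trained model, `features` come from a learnable embedding and
    `temperature` is a learnable scalar."""
    diff = features[:, None, :] - features[None, :, :]   # (N, N, D) pairwise diffs
    sq_dist = (diff ** 2).sum(-1)                        # (N, N) squared distances
    probs = np.exp(-temperature * sq_dist)
    np.fill_diagonal(probs, 0.0)                         # no self-loops
    return probs

def sample_knn_graph(probs, k=3):
    """Keep the k most probable neighbors per node: a deterministic
    stand-in for the stochastic sampling used for end-to-end training."""
    n = probs.shape[0]
    adj = np.zeros_like(probs)
    for i in range(n):
        nbrs = np.argsort(probs[i])[-k:]                 # indices of top-k probs
        adj[i, nbrs] = 1.0
    return adj
```

A graph convolution layer would then aggregate features over the sampled adjacency, and the downstream loss would backpropagate through the edge probabilities.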
Related papers
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
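The unrolling idea behind GDN can be sketched in a few lines: iterate a proximal gradient step for a deconvolution objective, then treat the step sizes and filter coefficients as learnable layer parameters. The sketch below assumes the simplest mixture, A_obs ≈ h0·I + h1·A_latent, with an l1 sparsity prior; the function names and fixed hyperparameters are illustrative, not the paper's architecture.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (promotes sparse graphs)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def gdn_unrolled(A_obs, h0=0.1, h1=1.0, steps=20, alpha=0.1, lam=0.05):
    """Unrolled proximal gradient (ISTA) for graph deconvolution under the
    toy model A_obs = h0*I + h1*A_latent. In GDN, h, alpha, and the
    threshold become learnable per-layer parameters trained end to end."""
    n = A_obs.shape[0]
    A = np.zeros_like(A_obs)
    for _ in range(steps):
        residual = h0 * np.eye(n) + h1 * A - A_obs      # gradient of the LS fit
        A = soft_threshold(A - alpha * h1 * residual, alpha * lam)
        np.fill_diagonal(A, 0.0)                        # graphs have no self-loops
    return A
```

Truncating the loop at a fixed number of steps and making each step's parameters trainable turns the iteration into a feed-forward network, which is what makes the model inductive.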
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
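A graph kernel of the kind this line of work plugs in can be shown concretely with the Weisfeiler-Lehman subtree kernel, a standard example: each node's label is iteratively refined by its neighbors' labels, and the kernel is the inner product of the resulting label histograms. This is a minimal sketch of one well-known kernel, not the paper's architecture.

```python
from collections import Counter

def wl_features(adj, labels, iterations=1):
    """Weisfeiler-Lehman refinement: each node's new label is
    (own label, sorted multiset of neighbor labels). Returns the
    histogram of all labels seen across rounds."""
    labels = list(labels)
    hist = Counter(labels)
    for _ in range(iterations):
        new = []
        for i in range(len(labels)):
            nbrs = sorted(labels[j] for j in range(len(labels)) if adj[i][j])
            new.append((labels[i],) + tuple(nbrs))
        labels = new
        hist.update(labels)
    return hist

def wl_kernel(g1, g2, iterations=1):
    """Graph kernel k(G1, G2) = <phi(G1), phi(G2)>: an inner product
    over shared WL labels. g1 and g2 are (adjacency, labels) pairs."""
    h1 = wl_features(*g1, iterations)
    h2 = wl_features(*g2, iterations)
    return sum(h1[k] * h2[k] for k in set(h1) & set(h2))
```

Because the kernel is computed directly on graph structure, no embedding of the input graph is needed, which is the property the abstract highlights.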
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) are efficient ways to get insight into large dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and social-temporal graph neural networks.
arXiv Detail & Related papers (2021-02-03T12:07:55Z)
- Generating a Doppelganger Graph: Resembling but Distinct [5.618335078130568]
We propose an approach to generating a doppelganger graph that resembles a given one in many graph properties.
The approach is an orchestration of graph representation learning, generative adversarial networks, and graph realization algorithms.
arXiv Detail & Related papers (2021-01-23T22:08:27Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods.
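The contrastive recipe described here, namely generating two augmented views of each graph and pulling their embeddings together, can be sketched with an edge-dropping augmentation and the standard NT-Xent loss. This is a generic illustration of the contrastive framework, not GraphCL's exact augmentation set or encoder.

```python
import numpy as np

def drop_edges(adj, rate=0.2, rng=None):
    """Graph augmentation: randomly drop a fraction of edges, keeping
    the adjacency symmetric (one kind of view GraphCL composes)."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(adj.shape) > rate
    mask = np.triu(mask, 1)
    mask = mask + mask.T          # symmetrize; diagonal stays zero
    return adj * mask

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two views' graph embeddings:
    z1[i] and z2[i] are embeddings of two augmentations of graph i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (N, N) scaled cosine sims
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal
```

Minimizing the loss over many graphs makes embeddings invariant to the chosen augmentations, which is where the claimed transferability and robustness come from.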
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization [18.7475578342125]
The advent of graph neural networks has improved the state of the art for both node-level and whole-graph representations in a vector space.
For whole-graph representation, most existing graph neural networks are trained in a supervised way with a graph classification loss.
In this paper, we propose an unsupervised graph neural network that generates a vector representation of an entire graph.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
- Latent-Graph Learning for Disease Prediction [44.26665239213658]
We show that it is possible to learn a single, optimal graph towards the GCN's downstream task of disease classification.
Unlike commonly employed spectral GCN approaches, our GCN is spatial and inductive, and can thus generalize to previously unseen patients as well.
arXiv Detail & Related papers (2020-03-27T08:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.