Graph Prolongation Convolutional Networks: Explicitly Multiscale Machine
Learning on Graphs with Applications to Modeling of Cytoskeleton
- URL: http://arxiv.org/abs/2002.05842v2
- Date: Mon, 6 Apr 2020 23:41:33 GMT
- Title: Graph Prolongation Convolutional Networks: Explicitly Multiscale Machine
Learning on Graphs with Applications to Modeling of Cytoskeleton
- Authors: C.B. Scott and Eric Mjolsness
- Abstract summary: We define a novel type of ensemble Graph Convolutional Network (GCN) model.
Using optimized linear projection operators to map between spatial scales of a graph, this ensemble model learns to aggregate information from each scale for its final prediction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define a novel type of ensemble Graph Convolutional Network (GCN) model.
Using optimized linear projection operators to map between spatial scales of a
graph, this ensemble model learns to aggregate information from each scale for
its final prediction. We calculate these linear projection operators as the
infima of an objective function relating the structure matrices used for each
GCN. Equipped with these projections, our model (a Graph
Prolongation-Convolutional Network) outperforms other GCN ensemble models at
predicting the potential energy of monomer subunits in a coarse-grained
mechanochemical simulation of microtubule bending. We demonstrate these
performance gains by measuring an estimate of the FLOPs spent to train each
model, as well as wall-clock time. Because our model learns at multiple scales,
it is possible to train at each scale according to a predetermined schedule of
coarse vs. fine training. We examine several such schedules adapted from the
Algebraic Multigrid (AMG) literature, and quantify the computational benefit of
each. We also compare this model to another model which features an optimized
coarsening of the input graph. Finally, we derive backpropagation rules for the
gradient of our network's output with respect to its input, and discuss how our
method may be extended to very large graphs.
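To make the abstract concrete, here is a minimal numpy sketch of the two ingredients it describes: fitting a linear prolongation operator P between a coarse and a fine structure matrix, and aggregating per-scale GCN outputs into one prediction. The Frobenius-norm commutation objective, the column normalization, the pseudo-inverse restriction, and the additive aggregation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_prolongation(A_fine, A_coarse, steps=500, lr=1e-3):
    """Fit a prolongation matrix P (n_fine x n_coarse) by gradient descent on
    ||A_fine P - P A_coarse||_F^2. This commutation objective and the column
    normalization are illustrative stand-ins for the paper's objective."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(A_fine.shape[0], A_coarse.shape[0]))
    for _ in range(steps):
        R = A_fine @ P - P @ A_coarse                  # residual of the objective
        P -= lr * 2 * (A_fine.T @ R - R @ A_coarse.T)  # exact gradient of ||R||_F^2
        P /= np.linalg.norm(P, axis=0, keepdims=True)  # avoid the trivial P = 0
    return P

def gcn_layer(A, X, W):
    """One graph convolution: ReLU(A X W), with A a structure matrix."""
    return np.maximum(A @ X @ W, 0.0)

# Toy fine (6-node) and coarse (3-node) structure matrices.
A_fine = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
                   [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
A_coarse = np.array([[0,1,0],[1,0,1],[0,1,0]], float)
P = fit_prolongation(A_fine, A_coarse)

# Ensemble prediction: run a GCN at each scale, prolong the coarse output to
# the fine scale, and aggregate (additive aggregation is an assumption here).
rng = np.random.default_rng(1)
X_fine = rng.normal(size=(6, 4))
X_coarse = np.linalg.pinv(P) @ X_fine              # restriction via pseudo-inverse
W_fine, W_coarse = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
y = gcn_layer(A_fine, X_fine, W_fine) + P @ gcn_layer(A_coarse, X_coarse, W_coarse)
```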
Related papers
- Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes [2.4306216325375196]
TAG U-Net is a graph convolutional network that accepts any mesh or graph structure as input.
The model constructs coarsened versions of each input graph and performs a set of convolution and pooling operations to predict the node-wise outputs on the original graph.
arXiv Detail & Related papers (2024-10-08T22:27:35Z)
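A toy numpy sketch of the coarsen/convolve/unpool pattern described in the TAG U-Net entry above; the one-hot cluster-assignment matrix S, the mean pooling, and the additive skip connection are illustrative assumptions rather than the actual TAG U-Net architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def coarsen(A, S):
    """Coarsen an adjacency matrix with a node-to-cluster assignment S (n x m)."""
    return S.T @ A @ S

def pool(X, S):
    """Mean-pool node features into their clusters."""
    return (S.T @ X) / S.sum(axis=0)[:, None]

def unpool(Xc, S):
    """Copy each cluster's features back to its member nodes."""
    return S @ Xc

# Toy graph: 4 nodes grouped into 2 clusters, {0,1} and {2,3}.
A = np.array([[0,1,1,0],[1,0,0,0],[1,0,0,1],[0,0,1,0]], float)
S = np.array([[1,0],[1,0],[0,1],[0,1]], float)
X = np.arange(8, dtype=float).reshape(4, 2)
W1, W2 = np.eye(2), np.eye(2)

Ac = coarsen(A, S)
H_fine = relu(A @ X @ W1)               # convolution at the original scale
H_coarse = relu(Ac @ pool(X, S) @ W2)   # convolution at the coarsened scale
Y = H_fine + unpool(H_coarse, S)        # node-wise prediction on the original graph
```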
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting: universal, generalizable structure learning for graph neural networks.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Creating generalizable downstream graph models with random projections [22.690120515637854]
We investigate graph representation learning approaches that enable models to generalize across graphs.
We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features.
The resulting features can be used to recover enough information about the local neighborhood of a node to enable inference with performance competitive with other approaches.
arXiv Detail & Related papers (2023-02-17T14:27:00Z)
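A minimal numpy sketch of the random-projection idea summarized above: successive powers of the transition matrix are applied to a random projection matrix, so no power is ever formed explicitly. The function names and feature construction shown here are hypothetical illustrations; the exact recipe that makes the features isomorphism-invariant is the paper's.

```python
import numpy as np

def transition_matrix(A):
    """Row-normalized random-walk transition matrix T = D^{-1} A."""
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1e-12)

def random_projection_features(A, k=8, num_powers=3, seed=0):
    """Estimate T^p R for p = 1..num_powers with one matrix-matrix pass per
    power, never materializing T^p itself; stacking the results gives a
    per-node feature matrix of shape (n, k * num_powers)."""
    T = transition_matrix(A)
    R = np.random.default_rng(seed).normal(size=(A.shape[0], k)) / np.sqrt(k)
    feats, Y = [], R
    for _ in range(num_powers):
        Y = T @ Y                # advance one power: Y becomes T^p R
        feats.append(Y)
    return np.hstack(feats)

A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
F = random_projection_features(A)   # node features for downstream graph models
```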
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
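A small numpy sketch of the graph-kernel idea summarized above: a positive semi-definite kernel built from an explicit walk-count feature map describes an input graph purely structurally, via its kernel values against a bank of reference graphs. The walk-count kernel and the fixed reference graphs are illustrative assumptions; the paper's architecture accepts arbitrary graph kernels, and its reference structures can be learned.

```python
import numpy as np

def walk_count_features(A, max_len=3):
    """Explicit feature map: total number of walks of each length 1..max_len."""
    phi, Ap = [], np.eye(A.shape[0])
    for _ in range(max_len):
        Ap = Ap @ A              # entries of A^p count walks of length p
        phi.append(Ap.sum())
    return np.array(phi)

def walk_kernel(A1, A2, max_len=3):
    """A simple PSD graph kernel: the inner product of explicit walk-count
    feature maps (a stand-in; any graph kernel could be plugged in)."""
    return walk_count_features(A1, max_len) @ walk_count_features(A2, max_len)

# A "kernel layer": describe an input graph by its kernel values against a
# small bank of reference graphs, with no node embeddings ever computed.
triangle = np.array([[0,1,1],[1,0,1],[1,1,0]], float)
path3    = np.array([[0,1,0],[1,0,1],[0,1,0]], float)
G = np.array([[0,1,1,0],[1,0,1,1],[1,1,0,0],[0,1,0,0]], float)
features = np.array([walk_kernel(G, H) for H in (triangle, path3)])
```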
- GIST: Distributed Training for Large-Scale Graph Convolutional Networks [18.964079367668262]
GIST is a hybrid layer and graph sampling method that disjointly partitions the global model into several smaller sub-GCNs.
This distributed framework improves model performance and significantly decreases wall-clock training time.
GIST seeks to enable large-scale GCN experimentation with the goal of bridging the existing gap in scale between graph machine learning and deep learning.
arXiv Detail & Related papers (2021-02-20T19:25:38Z)
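A toy numpy sketch of the disjoint-partitioning idea in the GIST entry above: one GCN layer's weight matrix is split column-wise into independent sub-GCNs whose outputs are reassembled. GIST's actual layer and graph sampling and its distributed synchronization are not modeled here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
n, d = 6, 8
A = (rng.random((n, n)) > 0.5).astype(float)
A = np.maximum(A, A.T)                       # symmetric toy adjacency
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

# Disjointly partition the layer's output dimensions into two sub-GCNs that
# could be trained on separate workers.
parts = np.array_split(np.arange(d), 2)
sub_outputs = [relu(A @ X @ W[:, idx]) for idx in parts]

# Reassemble the global layer output from the independently computed pieces.
H = np.concatenate(sub_outputs, axis=1)
assert np.allclose(H, relu(A @ X @ W))       # matches the unpartitioned layer
```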
- Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action Recognition [57.98278794950759]
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model irregular data.
We present a novel spatial-temporal GCN architecture defined via Poincaré geometry.
We evaluate our method on two of the largest current 3D datasets.
arXiv Detail & Related papers (2020-07-30T18:23:18Z)
- Optimal Transport Graph Neural Networks [31.191844909335963]
Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation.
We introduce OT-GNN, a model that computes graph embeddings using parametric prototypes.
arXiv Detail & Related papers (2020-06-08T14:57:39Z)
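A small sketch of the prototype idea in the OT-GNN entry above, using scipy's assignment solver as a simple stand-in for the Wasserstein machinery; the random prototypes and the equal point-cloud sizes are simplifying assumptions (the paper learns its prototypes end-to-end).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_distance(X, P):
    """Optimal-assignment distance between two equally sized point clouds,
    a cheap stand-in for the optimal-transport distances used by OT-GNN."""
    C = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=-1)  # pairwise costs
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].mean()

# Graph embedding = vector of distances from the node-embedding point cloud
# to a bank of parametric prototype point clouds (random here, learned there).
rng = np.random.default_rng(0)
node_embs = rng.normal(size=(5, 3))          # e.g. the output of a GNN encoder
prototypes = [rng.normal(size=(5, 3)) for _ in range(4)]
graph_embedding = np.array([matching_distance(node_embs, P) for P in prototypes])
```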
- Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolutional networks (GCNs).
We design a GAE-based model for graph clustering, namely the Embedding Graph Auto-Encoder (EGAE), to be consistent with the theory.
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that yields dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph-based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) Recommender Systems (RS).
We show that removing non-linearities enhances recommendation performance, consistent with the theory behind simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
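A minimal numpy sketch in the spirit of the two findings above: purely linear propagation over the normalized user-item bipartite graph, with a residual-style mean over layer embeddings. The symmetric normalization and the mean aggregation are assumptions for illustration, not the paper's exact residual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 5, 8
R = (rng.random((n_users, n_items)) > 0.6).astype(float)  # user-item interactions

# Symmetrically normalized adjacency of the bipartite user-item graph.
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])
deg = np.maximum(A.sum(axis=1), 1.0)
A_hat = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]

E = rng.normal(size=(n_users + n_items, d))   # initial user/item embeddings
layers = [E]
for _ in range(3):
    layers.append(A_hat @ layers[-1])         # purely linear propagation, no ReLU

E_final = np.mean(layers, axis=0)             # residual-style aggregation of layers
users, items = E_final[:n_users], E_final[n_users:]
scores = users @ items.T                      # predicted preference for every pair
```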
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.