Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions
- URL: http://arxiv.org/abs/2104.01481v1
- Date: Sat, 3 Apr 2021 20:54:36 GMT
- Title: Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions
- Authors: Shyam A. Tailor, Felix L. Opolka, Pietro Liò, Nicholas D. Lane
- Abstract summary: We present a new GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics suited to accelerator implementation.
Our proposal uses memory proportional to the number of vertices in the graph, in contrast to competing methods which require memory proportional to the number of edges.
We also propose aggregator fusion, a technique that enables GNNs to significantly boost their representational power with only a 19% increase in latency over standard sparse matrix multiplication.
- Score: 11.769185588579488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency. In this work we present a new type of GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics suited to accelerator implementation. Our proposal uses memory proportional to the number of vertices in the graph, in contrast to competing methods which require memory proportional to the number of edges; we find our efficient approach actually achieves higher accuracy than competing approaches across 5 large and varied datasets against strong baselines. We achieve our results by using a novel adaptive filtering approach inspired by signal processing; it can be interpreted as enabling each vertex to have its own weight matrix, and is not related to attention. Following our focus on efficient hardware usage, we propose aggregator fusion, a technique to enable GNNs to significantly boost their representational power, with only a small increase in latency of 19% over standard sparse matrix multiplication. Code and pretrained models can be found at https://github.com/shyam196/egc.
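To make the two ideas concrete, below is a minimal NumPy sketch of one way to read the abstract: per-vertex coefficients mix a small set of shared basis weight matrices (the adaptive filter), and several aggregators reuse the same gathered messages (aggregator fusion). All names, shapes, and the coefficient mapping are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
import numpy as np

def egc_layer_sketch(x, edge_index, bases, coef_weight,
                     aggregators=("sum", "mean", "max")):
    """Hypothetical EGC-style layer (float features assumed).

    x           : (N, F_in) node features
    edge_index  : (2, E) [source, target] indices
    bases       : (B, F_in, F_out) shared basis weight matrices
    coef_weight : (F_in, len(aggregators) * B) per-vertex coefficient map
    """
    src, dst = edge_index
    N, B = x.shape[0], bases.shape[0]

    # Gather messages once; in a fused hardware kernel every aggregator would
    # share a single pass over the edge list (NumPy forces one scatter per
    # aggregator, but the expensive gather is reused).
    msgs = x[src]                                     # (E, F_in)
    agg_outs = []
    for agg in aggregators:
        out = np.zeros_like(x)
        if agg in ("sum", "mean"):
            np.add.at(out, dst, msgs)
            if agg == "mean":
                deg = np.zeros(N)
                np.add.at(deg, dst, 1.0)
                out /= np.maximum(deg, 1.0)[:, None]
        elif agg == "max":
            out[:] = -np.inf
            np.maximum.at(out, dst, msgs)
            out[np.isinf(out)] = 0.0                  # isolated vertices
        agg_outs.append(out)

    # Adaptive filtering: each vertex mixes the B shared bases with its own
    # coefficients, behaving like a per-vertex weight matrix while storing
    # only O(N) extra state (no per-edge tensors, unlike attention).
    coef = (x @ coef_weight).reshape(N, len(aggregators), B)
    y = 0.0
    for a, agg_out in enumerate(agg_outs):
        per_basis = np.einsum('nf,bfo->nbo', agg_out, bases)  # (N, B, F_out)
        y = y + np.einsum('nb,nbo->no', coef[:, a, :], per_basis)
    return y
```

Because only the coefficient map and the B shared bases are stored, the per-vertex adaptivity costs memory proportional to the number of vertices rather than the per-edge memory required by attention-style weighting, which matches the abstract's O(N)-versus-O(E) claim.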
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
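As a rough illustration of the pre-computation idea (not RpHGNN's actual pipeline; the per-relation propagation and Gaussian projection below are our assumptions):

```python
import numpy as np

def random_projection_precompute(x, adj_by_relation, out_dim, rng=None):
    """Sketch: one-time per-relation message passing followed by a Gaussian
    random projection, yielding a fixed-size tensor per relation that a
    downstream model can consume without further message passing.

    x               : (N, F) node features
    adj_by_relation : dict mapping relation name -> (N, N) adjacency matrix
    out_dim         : projected feature size per relation
    """
    rng = rng or np.random.default_rng(0)
    projected = {}
    for rel, adj in adj_by_relation.items():
        h = adj @ x                                   # one-time propagation
        # Johnson-Lindenstrauss-style projection approximately preserves
        # geometry while shrinking the stored tensor.
        proj = rng.normal(0.0, 1.0 / np.sqrt(out_dim),
                          size=(x.shape[1], out_dim))
        projected[rel] = h @ proj
    return projected
```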
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels
on GPUs [26.607519045805745]
Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features.
Prior works have proposed to exploit the sparsity in the input graph to accelerate GNNs, using full-graph-level or block-level sparsity formats.
We show that these fail to balance the sparsity benefit against kernel execution efficiency.
We propose a novel system, AdaptGear, that addresses the challenge of optimizing GNN performance via adaptive subgraph-level kernels.
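A toy version of the dispatch idea (our illustration only; real systems operate on sparse GPU tiles rather than dense NumPy slices, and the density threshold is an assumed tuning knob):

```python
import numpy as np

def spmm_adaptive_blocks(adj, x, block=256, dense_threshold=0.1):
    """Toy dispatch: tile the (dense, for illustration) adjacency into row
    blocks and route each tile to a dense or sparse routine based on its
    measured density. A real system would store per-subgraph sparse tiles
    and pick specialised GPU kernels instead.
    """
    N = adj.shape[0]
    out = np.zeros((N, x.shape[1]))
    for r0 in range(0, N, block):
        tile = adj[r0:r0 + block]                     # (b, N) row block
        density = np.count_nonzero(tile) / tile.size
        if density >= dense_threshold:
            out[r0:r0 + block] = tile @ x             # dense kernel path
        else:
            rows, cols = np.nonzero(tile)             # sparse (COO) path
            np.add.at(out, r0 + rows,
                      tile[rows, cols][:, None] * x[cols])
    return out
```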
arXiv Detail & Related papers (2023-05-27T08:22:12Z) - EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
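One plausible reading of embedding-induced refinement, sketched below (the kNN construction and the mixing weight alpha are our assumptions, not the paper's exact update rule):

```python
import numpy as np

def refine_graph(adj0, z, k=10, alpha=0.5):
    """Embedding-induced refinement, one plausible form: build a kNN graph
    from the learned embeddings z and blend it with the initial adjacency
    adj0 via a mixing weight alpha.
    """
    zn = z / np.clip(np.linalg.norm(z, axis=1, keepdims=True), 1e-12, None)
    sim = zn @ zn.T                                   # cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # no self-loops in kNN
    knn = np.zeros_like(sim)
    topk = np.argpartition(-sim, k, axis=1)[:, :k]    # each node's k nearest
    knn[np.arange(sim.shape[0])[:, None], topk] = 1.0
    knn = np.maximum(knn, knn.T)                      # symmetrise
    return alpha * adj0 + (1.0 - alpha) * knn
```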
arXiv Detail & Related papers (2022-11-19T09:08:43Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
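A generic sketch of sampled, affinity-weighted message passing in this spirit (the random sampling and dot-product affinities are our stand-ins for the paper's learned mechanisms):

```python
import numpy as np

def dynamic_message_passing(x, k=8, rng=None):
    """Sampled, affinity-weighted message passing, sketched: instead of a
    prohibitive fully-connected graph (O(N^2) messages), each node aggregates
    from a small dynamically sampled set of nodes weighted by predicted
    affinities, for O(N * k) cost.
    """
    rng = rng or np.random.default_rng(0)
    N, _ = x.shape
    out = np.zeros_like(x)
    for i in range(N):
        nbrs = rng.choice(N, size=k, replace=False)   # dynamic sample
        affinity = x[nbrs] @ x[i]                     # (k,) scores
        w = np.exp(affinity - affinity.max())         # softmax weights
        w /= w.sum()
        out[i] = w @ x[nbrs]                          # weighted aggregation
    return out
```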
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on acknowledged benchmark datasets show promising results, demonstrating the strong performance of AKGNN.
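The summary does not spell out the kernel itself; the sketch below shows one generic form of an adaptive spectral kernel (a learnable scalar interpolating between all-pass and low-pass filtering), which may differ from AKGNN's exact parameterisation:

```python
import numpy as np

def adaptive_kernel(adj, lam):
    """Generic adaptive spectral kernel: a learnable scalar lam >= 1 mixes an
    all-pass filter (identity) with a low-pass filter (symmetrically
    normalised adjacency). lam -> 1 gives pure low-pass smoothing; large lam
    approaches the identity.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    mix = 1.0 / lam                                   # learnable in training
    return (1.0 - mix) * np.eye(adj.shape[0]) + mix * a_norm
```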
arXiv Detail & Related papers (2021-12-08T20:23:58Z) - GNNAutoScale: Scalable and Expressive Graph Neural Networks via
Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
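A minimal sketch of the historical-embedding trick (shapes and the caching rule are our assumptions, not the GAS implementation):

```python
import numpy as np

def gas_aggregate(h_batch, batch_nodes, edge_index, hist):
    """Historical embeddings, sketched: neighbours inside the current
    mini-batch contribute fresh embeddings, while out-of-batch neighbours
    are read from a cache written in earlier iterations, so the computation
    graph never grows beyond the batch.

    h_batch     : (b, F) fresh embeddings for the sampled batch
    batch_nodes : (b,) global ids of the batch nodes
    edge_index  : (2, E) [source, target] pairs touching the batch
    hist        : (N, F) cache of each node's most recent embedding
    """
    pos = {int(n): i for i, n in enumerate(batch_nodes)}
    out = np.zeros_like(h_batch)
    for s, t in edge_index.T:
        if int(t) not in pos:
            continue                                  # only compute batch rows
        # Fresh value if the source is in the batch, cached value otherwise.
        msg = h_batch[pos[int(s)]] if int(s) in pos else hist[int(s)]
        out[pos[int(t)]] += msg                       # sum aggregation
    hist[np.asarray(batch_nodes)] = h_batch           # refresh the cache
    return out
```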
arXiv Detail & Related papers (2021-06-10T09:26:56Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
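A sketch of the perturbation loop (hyperparameters are illustrative; FLAG additionally reuses the inner ascent steps for parameter updates, the "free" part, which is omitted here):

```python
import numpy as np

def flag_augment(x, loss_grad_fn, steps=3, step_size=1e-3, rng=None):
    """FLAG-style feature augmentation, sketched: start from a small random
    perturbation of the node features and take a few signed gradient-ascent
    steps on the training loss, so the model trains against worst-case
    feature noise. loss_grad_fn(x_pert) must return d(loss)/d(x_pert);
    in a real framework autograd supplies it.
    """
    rng = rng or np.random.default_rng(0)
    delta = rng.uniform(-step_size, step_size, size=x.shape)
    for _ in range(steps):
        grad = loss_grad_fn(x + delta)
        delta = delta + step_size * np.sign(grad)     # ascend the loss
        # Keep the perturbation inside a small ball around the clean features.
        delta = np.clip(delta, -steps * step_size, steps * step_size)
    return x + delta
```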
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration [14.958793135751149]
Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM).
Exploiting data sparsity is a common approach to further accelerate GEMM for CNN inference, and in particular, structural sparsity has the advantages of predictable load balancing and very low index overhead.
We address a key architectural challenge with structural sparsity: how to provide support for a range of sparsity levels while maintaining high utilization of the hardware.
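To illustrate what structural (density-bound) sparsity means in software terms (the block and budget sizes below are illustrative, not the paper's hardware parameters):

```python
import numpy as np

def prune_density_bound(w, block=8, keep=2):
    """Density-bound structured pruning, sketched: within every group of
    `block` consecutive weights along a row, keep only the `keep` largest
    magnitudes. A fixed per-block nonzero budget is what gives the hardware
    predictable load balancing, and indices fit in log2(block) bits.
    """
    w = w.copy()
    rows, cols = w.shape
    assert cols % block == 0, "pad columns to a multiple of the block size"
    for r in range(rows):
        for c0 in range(0, cols, block):
            grp = w[r, c0:c0 + block]                 # view into the copy
            drop = np.argsort(np.abs(grp))[:-keep]    # all but `keep` largest
            grp[drop] = 0.0
    return w
```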
arXiv Detail & Related papers (2020-09-04T20:17:42Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)