Greenformer: Factorization Toolkit for Efficient Deep Neural Networks
- URL: http://arxiv.org/abs/2109.06762v1
- Date: Tue, 14 Sep 2021 15:27:05 GMT
- Title: Greenformer: Factorization Toolkit for Efficient Deep Neural Networks
- Authors: Samuel Cahyawijaya, Genta Indra Winata, Holy Lovenia, Bryan Wilie,
Wenliang Dai, Etsuko Ishii, Pascale Fung
- Abstract summary: Greenformer is a toolkit to accelerate the computation of neural networks through matrix factorization.
Our experimental results show that Greenformer is effective for a wide range of scenarios.
- Score: 35.47418512373472
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While the recent advances in deep neural networks (DNN) bring remarkable
success, the computational cost also increases considerably. In this paper, we
introduce Greenformer, a toolkit to accelerate the computation of neural
networks through matrix factorization while maintaining performance.
Greenformer can be easily applied with a single line of code to any DNN model.
Our experimental results show that Greenformer is effective for a wide range of
scenarios. We provide the showcase of Greenformer at
https://samuelcahyawijaya.github.io/greenformer-demo/.
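The toolkit exposes this as a one-line call; as a rough illustration of the underlying idea, the sketch below factorizes a dense linear layer into a rank-r product via truncated SVD. The helper name and the rank choice are ours for illustration, not Greenformer's actual API.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace W (out x in) with the rank-r product B @ A via truncated SVD.
    Hypothetical helper; Greenformer's real interface differs."""
    W = layer.weight.data                           # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = torch.diag(S[:rank].sqrt()) @ Vh[:rank]     # (rank, in)
    B = U[:, :rank] @ torch.diag(S[:rank].sqrt())   # (out, rank)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = A
    second.weight.data = B
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

dense = nn.Linear(1024, 1024)
low_rank = factorize_linear(dense, rank=64)
# Fewer parameters means fewer multiply-accumulates per forward pass.
print(sum(p.numel() for p in dense.parameters()),
      sum(p.numel() for p in low_rank.parameters()))
```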
Related papers
- Green Multigrid Network [6.397295511397678]
GreenLearning networks (GL) learn Green's functions in physical space, making them an interpretable model for capturing unknown solution operators of partial differential equations (PDEs).
We propose a framework named Green Multigrid networks (GreenMGNet), an operator learning algorithm designed for a class of asymptotically smooth Green's functions.
Compared with the pioneering GL, the new framework achieves significantly better accuracy and efficiency.
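As a hedged sketch of the GreenLearning idea this work builds on (not GreenMGNet's multigrid scheme itself), a small network can approximate the kernel G(x, y), and the solution operator is then a quadrature of G against the forcing term f; all names and sizes below are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative GreenLearning-style model: an MLP approximates the kernel
# G(x, y); the solution operator is a quadrature of G against f.
class GreenKernel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, x, y):
        # x: (n,), y: (m,) grid points -> kernel matrix (n, m)
        xx, yy = torch.meshgrid(x, y, indexing="ij")
        return self.net(torch.stack([xx, yy], dim=-1)).squeeze(-1)

def apply_operator(model, x, y, f):
    # u(x_i) ~= sum_j G(x_i, y_j) f(y_j) * dy on a uniform grid
    dy = y[1] - y[0]
    return model(x, y) @ f * dy

x = torch.linspace(0, 1, 50)
f = torch.sin(torch.pi * x)                 # a sample forcing term
u = apply_operator(GreenKernel(), x, x, f)  # untrained, illustrative only
```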
arXiv Detail & Related papers (2024-07-04T03:02:10Z)
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependency in graphs by shallower models instead of deeper models, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further accelerations through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
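LazyGNN's exact lazy-propagation scheme is not detailed in this summary; as a generic, hedged illustration of reaching long-range neighbors without a deep model, the sketch below precomputes several propagation hops once, offline, and trains only a shallow network on the result.

```python
import torch
import torch.nn as nn

# NOT LazyGNN's actual algorithm -- a generic illustration of the shallow-model
# idea: propagate features over several hops once, up front, then train only a
# shallow network on the precomputed result (works with mini-batches of rows).
def precompute_hops(adj_norm, X, num_hops):
    H = X
    for _ in range(num_hops):
        H = adj_norm @ H          # one propagation step, done offline
    return H

n, d = 1000, 32
adj = (torch.rand(n, n) < 0.01).float()
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)
X = torch.randn(n, d)
features = precompute_hops(adj_norm, X, num_hops=4)   # 4-hop receptive field
shallow = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 7))
logits = shallow(features)
```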
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited range of dependencies that finite-depth models can capture, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
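EIGNN itself derives an exact solution for its infinite-depth layer; as a hedged sketch of the general implicit-layer idea, the code below approximates an infinite-depth graph layer by iterating to a fixed point (the update rule, alpha, and normalization are our illustrative choices).

```python
import torch

# Sketch of an implicit, "infinite-depth" graph layer: the state is the fixed
# point Z* = alpha * A_hat @ Z* @ W + X, found here by simple iteration.
# alpha < 1 plus normalized A_hat and W helps keep the map contractive.
def infinite_depth_layer(A_hat, X, W, alpha=0.8, tol=1e-5, max_iter=200):
    W = W / torch.linalg.matrix_norm(W, ord=2)   # spectral normalization
    Z = X.clone()
    for _ in range(max_iter):
        Z_next = alpha * A_hat @ Z @ W + X
        if (Z_next - Z).abs().max() < tol:
            break
        Z = Z_next
    return Z

n, d = 100, 16
A = (torch.rand(n, n) < 0.05).float()
A_hat = A / A.sum(1, keepdim=True).clamp(min=1)  # row-normalized adjacency
X = torch.randn(n, d)
W = torch.randn(d, d)
Z_star = infinite_depth_layer(A_hat, X, W)       # long-range information via the fixed point
```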
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
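A hedged sketch of the idea: aggregate each node's 0..L-hop neighborhoods once, offline, stack them as a length-(L+1) sequence, and feed ordinary mini-batches of node sequences to any sequence model (the Transformer choice and all sizes below are ours).

```python
import torch
import torch.nn as nn

# Sketch: per-node hop-wise aggregates become a sequence, decoupling
# graph propagation (done once) from mini-batch training.
def neighborhood_to_sequence(adj_norm, X, L):
    hops, H = [X], X
    for _ in range(L):
        H = adj_norm @ H
        hops.append(H)
    return torch.stack(hops, dim=1)   # (num_nodes, L+1, feat_dim)

n, d, L = 500, 32, 3
adj = (torch.rand(n, n) < 0.02).float()
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)
seq = neighborhood_to_sequence(adj_norm, torch.randn(n, d), L)

# Any off-the-shelf sequence encoder now applies; a tiny Transformer here.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=1)
node_repr = encoder(seq).mean(dim=1)  # (num_nodes, feat_dim)
```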
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Accelerating Large Scale Real-Time GNN Inference using Channel Pruning [7.8799581908375185]
Graph Neural Networks (GNNs) are proven to be powerful models to generate node embedding for downstream applications.
However, due to the high computation complexity of GNN inference, it is hard to deploy GNNs for large-scale or real-time applications.
We propose to accelerate GNN inference by pruning the dimensions in each layer with negligible accuracy loss.
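The paper selects channels with a LASSO-based criterion; as a simpler hedged illustration of channel pruning, the sketch below drops low-magnitude output channels of one linear layer together with the matching input rows of the next.

```python
import torch
import torch.nn as nn

# Illustrative magnitude-based channel pruning (the paper's actual
# selection criterion is LASSO regression, not weight magnitude).
def prune_channels(layer1: nn.Linear, layer2: nn.Linear, keep: int):
    scores = layer1.weight.abs().sum(dim=1)        # one score per output channel
    idx = scores.topk(keep).indices.sort().values  # channels to keep
    new1 = nn.Linear(layer1.in_features, keep, bias=layer1.bias is not None)
    new2 = nn.Linear(keep, layer2.out_features, bias=layer2.bias is not None)
    new1.weight.data = layer1.weight.data[idx]
    if layer1.bias is not None:
        new1.bias.data = layer1.bias.data[idx]
    new2.weight.data = layer2.weight.data[:, idx]
    if layer2.bias is not None:
        new2.bias.data = layer2.bias.data.clone()
    return new1, new2

f1, f2 = nn.Linear(64, 256), nn.Linear(256, 64)
p1, p2 = prune_channels(f1, f2, keep=128)   # halve the hidden dimension
x = torch.randn(4, 64)
y = p2(torch.relu(p1(x)))                   # pruned forward pass
```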
arXiv Detail & Related papers (2021-05-10T17:28:44Z)
- Boost then Convolve: Gradient Boosting Meets Graph Neural Networks [6.888700669980625]
We show that gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous data.
We propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds.
Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of the GNN.
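A hedged sketch of the joint loop: trees produce node features, a GNN consumes them, and each new tree fits the negative gradient of the GNN loss with respect to those features. The one-layer "GNN", shapes, and hyperparameters below are invented for illustration.

```python
import torch
import torch.nn as nn
import numpy as np
from sklearn.tree import DecisionTreeRegressor

n, d_raw, d_feat = 200, 10, 4
Xraw = np.random.randn(n, d_raw).astype(np.float32)   # tabular node data
A = (torch.rand(n, n) < 0.05).float()
A_hat = A / A.sum(1, keepdim=True).clamp(min=1)
y = torch.randn(n)                                    # regression targets

gnn = nn.Linear(d_feat, 1)                            # toy 1-hop GNN readout
opt = torch.optim.Adam(gnn.parameters(), lr=0.01)
F = torch.zeros(n, d_feat)                            # boosted node features

for _ in range(10):
    F_in = F.clone().requires_grad_(True)
    pred = gnn(A_hat @ F_in).squeeze(-1)              # propagate, then readout
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()      # update the GNN
    grad = F_in.grad.detach().numpy()                 # dLoss/dFeatures
    for j in range(d_feat):                           # one new tree per feature dim
        tree = DecisionTreeRegressor(max_depth=3).fit(Xraw, -grad[:, j])
        F[:, j] += 0.5 * torch.from_numpy(tree.predict(Xraw).astype(np.float32))
```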
arXiv Detail & Related papers (2021-01-21T10:46:41Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
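One standard building block behind such binarization, sketched here with hedging (the paper evaluates several strategies, not necessarily this exact layer): weights are binarized with sign() in the forward pass while the straight-through estimator routes gradients to the real-valued weights.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                      # {-1, 0, +1} weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # clip gradient outside [-1, 1]

class BinaryGraphLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_in, d_out) * 0.1)

    def forward(self, A_hat, X):
        Wb = BinarizeSTE.apply(self.weight)
        scale = self.weight.abs().mean()          # per-layer scaling factor
        return A_hat @ X @ Wb * scale             # aggregate, then binary transform

n, d = 100, 16
A_hat = torch.eye(n)                              # self-loops only, for brevity
layer = BinaryGraphLayer(d, d)
out = layer(A_hat, torch.randn(n, d))
out.sum().backward()                              # gradients flow via the STE
```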
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Graph Neural Network for Large-Scale Network Localization [35.29322617956428]
Graph neural networks (GNNs) are widely used for classifying structured data in machine learning.
In this work, we adopt GNN for a classic but challenging nonlinear regression problem, namely the network localization.
Our main findings are as follows. First, GNN is potentially the best solution to large-scale network localization in terms of accuracy, robustness, and computational time.
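A hedged sketch of the setup: noisy pairwise distance measurements give both the graph (a thresholded adjacency) and the node features, and a small propagate-then-regress model predicts each node's 2-D coordinates, supervised only on anchor nodes. All thresholds, sizes, and the model form below are illustrative.

```python
import torch
import torch.nn as nn

n = 200
pos = torch.rand(n, 2)                                   # ground-truth positions
dist = torch.cdist(pos, pos) + 0.01 * torch.randn(n, n)  # noisy measurements
A = (dist < 0.3).float()                                 # connect nearby nodes
A_hat = A / A.sum(1, keepdim=True).clamp(min=1)

model = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
anchors = torch.arange(20)                               # nodes with known positions

for step in range(200):
    pred = model(A_hat @ dist)                           # one propagation + MLP
    loss = ((pred[anchors] - pos[anchors]) ** 2).mean()  # supervise on anchors only
    opt.zero_grad(); loss.backward(); opt.step()
```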
arXiv Detail & Related papers (2020-10-22T12:39:26Z)
- Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification and 3D mesh representation learning.
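A hedged sketch of one common $\Pi$-Net-style parameterization (exact factorizations vary across the paper's models): the output is a polynomial of the input built from Hadamard products of linear maps, with no elementwise activation functions.

```python
import torch
import torch.nn as nn

# Each recursion step multiplies in one more linear map of the input,
# raising the polynomial degree of the output by one.
class PolyNet(nn.Module):
    def __init__(self, d_in, d_hidden, d_out, degree=3):
        super().__init__()
        self.U = nn.ModuleList(nn.Linear(d_in, d_hidden, bias=False)
                               for _ in range(degree))
        self.C = nn.Linear(d_hidden, d_out)

    def forward(self, z):
        x = self.U[0](z)
        for U_n in self.U[1:]:
            x = U_n(z) * x + x          # Hadamard product plus skip term
        return self.C(x)

net = PolyNet(d_in=32, d_hidden=64, d_out=10, degree=3)
logits = net(torch.randn(8, 32))        # degree-3 polynomial of the input
```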
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
- Adaptive Explainable Neural Networks (AxNNs) [8.949704905866888]
We develop a new framework called Adaptive Explainable Neural Networks (AxNN) for achieving the dual goals of good predictive performance and model interpretability.
For predictive performance, we build a structured neural network made up of ensembles of generalized additive model networks and additive index models.
For interpretability, we show how to decompose the results of AxNN into main effects and higher-order interaction effects.
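A hedged sketch of the GAM-network building block inside AxNN (interaction terms and the additive index models are omitted, and all names below are ours): one small subnetwork per input feature, summed to give the prediction, so each feature's main effect can be read off directly.

```python
import torch
import torch.nn as nn

class GAMNet(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # effects[:, j] is the main effect of feature j -- the interpretable part
        effects = torch.cat(
            [net(x[:, j:j + 1]) for j, net in enumerate(self.subnets)], dim=1)
        return effects.sum(dim=1) + self.bias, effects

model = GAMNet(n_features=5)
pred, effects = model(torch.randn(32, 5))
print(effects.shape)   # (32, 5): one additive contribution per feature
```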
arXiv Detail & Related papers (2020-04-05T23:40:57Z)