GraphChallenge.org Sparse Deep Neural Network Performance
- URL: http://arxiv.org/abs/2004.01181v2
- Date: Mon, 6 Apr 2020 02:38:52 GMT
- Title: GraphChallenge.org Sparse Deep Neural Network Performance
- Authors: Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi
- Abstract summary: The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data.
The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems.
- Score: 8.685102575397874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to
developing new solutions for analyzing graphs and sparse data. Sparse AI
analytics present unique scalability difficulties. The Sparse Deep Neural
Network (DNN) Challenge draws upon prior challenges from machine learning, high
performance computing, and visual analytics to create a challenge that is
reflective of emerging sparse AI systems. The sparse DNN challenge is based on
a mathematically well-defined DNN inference computation and can be implemented
in any programming environment. In 2019 several sparse DNN challenge
submissions were received from a wide range of authors and organizations. This
paper presents a performance analysis of the best performers of these
submissions. These submissions show that their state-of-the-art sparse DNN
execution time, $T_{\rm DNN}$, is a strong function of the number of DNN
operations performed, $N_{\rm op}$. The sparse DNN challenge provides a clear
picture of current sparse DNN systems and underscores the need for new
innovations to achieve high performance on very large sparse DNNs.
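The inference computation at the heart of the challenge is compact enough to sketch. Below is a minimal, illustrative Python implementation using scipy.sparse. The per-layer bias, the activation clamp value, and the toy problem sizes are placeholders for this sketch; the official challenge specification fixes these constants for each network size, and the reference implementations on GraphChallenge.org are authoritative.
```python
# Minimal sketch of the Sparse DNN Challenge inference kernel:
#   Y_{l+1} = h(Y_l @ W_l + b_l), with h = ReLU
# (the official benchmark also clamps activations from above).
# The bias and clamp values below are illustrative placeholders.
import time
import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(Y0, weights, bias=-0.3, ymax=32.0):
    """Run feed-forward inference through a stack of sparse layers.

    Y0      : scipy.sparse CSR matrix, one input feature vector per row.
    weights : list of scipy.sparse CSR weight matrices W_l.
    """
    Y = Y0
    for W in weights:
        Z = Y @ W                      # sparse-sparse matrix multiply
        Z.data += bias                 # bias applied to nonzero entries only
        Z.data = np.minimum(np.maximum(Z.data, 0.0), ymax)  # ReLU + clamp
        Z.eliminate_zeros()            # keep Y sparse between layers
        Y = Z
    return Y

# Toy example: random sparse inputs and weights.
rng = np.random.default_rng(0)
Y0 = sp.random(1000, 1024, density=0.1, format="csr", random_state=rng)
weights = [sp.random(1024, 1024, density=0.01, format="csr", random_state=rng)
           for _ in range(8)]

t0 = time.perf_counter()
Y = sparse_dnn_inference(Y0, weights)
t_dnn = time.perf_counter() - t0

# The challenge reports a rate of the form N_op / T_DNN, where N_op counts
# the multiply-accumulates actually performed across the layers.
print(f"T_DNN = {t_dnn:.3f} s, output nnz = {Y.nnz}")
```
Timing this loop over the challenge's provided networks, at increasing neuron and layer counts, yields the $T_{\rm DNN}$ versus $N_{\rm op}$ scaling that the paper analyzes.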
Related papers
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint DNN verification approach.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first- and second-place performers in VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z) - Edge AI as a Service with Coordinated Deep Neural Networks [0.24578723416255746]
CoDE aims to find the optimal path, which is the path with the highest possible reward, by creating multi-task DNNs from individual models.
Experiments show that CoDE enhances inference throughput and achieves higher precision than a state-of-the-art existing method.
arXiv Detail & Related papers (2024-01-01T01:54:53Z) - LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependencies in graphs with shallower models instead of deeper ones, leading to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z) - You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets [105.24703398193843]
The existence of untrained subnetworks in graph neural networks (GNNs) still remains mysterious.
We show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem.
We also observe that such sparse untrained subnetworks show appealing performance in out-of-distribution detection and robustness to input perturbations.
arXiv Detail & Related papers (2022-11-28T14:17:36Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - Making a Spiking Net Work: Robust brain-like unsupervised machine learning [0.0]
Spiking Neural Networks (SNNs) are an alternative to Artificial Neural Networks (ANNs).
SNNs struggle with dynamical stability and cannot match the accuracy of ANNs.
We show how an SNN can overcome many of the shortcomings that have been identified in the literature.
arXiv Detail & Related papers (2022-08-02T02:10:00Z) - EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited ability of finite-depth models to capture long-range dependencies, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z) - SpikeMS: Deep Spiking Neural Network for Motion Segmentation [7.491944503744111]
SpikeMS is the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation.
We show that SpikeMS is capable of incremental predictions, or predictions from smaller amounts of test data than it is trained on.
arXiv Detail & Related papers (2021-05-13T21:34:55Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - SyReNN: A Tool for Analyzing Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains.
This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation.
arXiv Detail & Related papers (2021-01-09T00:27:23Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.