Towards Efficient Visual Simplification of Computational Graphs in Deep
Neural Networks
- URL: http://arxiv.org/abs/2212.10774v1
- Date: Wed, 21 Dec 2022 05:17:13 GMT
- Title: Towards Efficient Visual Simplification of Computational Graphs in Deep
Neural Networks
- Authors: Rusheng Pan, Zhiyong Wang, Yating Wei, Han Gao, Gongchang Ou, Caleb
Chen Cao, Jingli Xu, Tong Xu and Wei Chen
- Abstract summary: A computational graph in a deep neural network (DNN) denotes a specific data flow diagram (DFD) composed of many tensors and operators.
Existing toolkits for visualizing computational graphs are not applicable when the structure is highly complicated and large-scale.
We propose a suite of visual simplification techniques, including a cycle-removing method, a module-based edge-pruning algorithm, and an isomorphic subgraph stacking strategy.
- Score: 16.79256014407765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A computational graph in a deep neural network (DNN) denotes a specific data
flow diagram (DFD) composed of many tensors and operators. Existing toolkits
for visualizing computational graphs are not applicable when the structure is
highly complicated and large-scale (e.g., BERT [1]). To address this problem,
we propose leveraging a suite of visual simplification techniques, including a
cycle-removing method, a module-based edge-pruning algorithm, and an isomorphic
subgraph stacking strategy. We design and implement an interactive
visualization system that is suitable for computational graphs with up to 10
thousand elements. Experimental results and usage scenarios demonstrate that
our tool reduces the number of elements by 60% on average and hence improves
performance when recognizing and diagnosing DNN models. Our contributions are
integrated into an
open-source DNN visualization toolkit, namely, MindInsight [2].
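The abstract does not spell out the cycle-removing method itself; a common way to break cycles before laying out a data flow graph is to detect back edges with a depth-first search and drop (or reverse) them. The Python sketch below illustrates that generic approach and is not the paper's exact algorithm.

```python
from collections import defaultdict

def remove_cycles(num_nodes, edges):
    """Drop back edges found by iterative DFS so the graph becomes a DAG.

    Illustrative sketch only; the paper's actual cycle-removing method
    may differ (e.g., it could reverse edges instead of dropping them).
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * num_nodes
    back_edges = set()

    for root in range(num_nodes):
        if color[root] != WHITE:
            continue
        stack = [(root, iter(adj[root]))]
        color[root] = GRAY
        while stack:
            u, it = stack[-1]
            for v in it:
                if color[v] == GRAY:          # edge into the active DFS path: cycle
                    back_edges.add((u, v))
                elif color[v] == WHITE:
                    color[v] = GRAY
                    stack.append((v, iter(adj[v])))
                    break
            else:
                color[u] = BLACK
                stack.pop()

    return [e for e in edges if e not in back_edges]

edges = [(0, 1), (1, 2), (2, 0), (1, 3)]
print(remove_cycles(4, edges))   # drops the back edge (2, 0)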
Related papers
- Towards Efficient Training of Graph Neural Networks: A Multiscale Approach [20.713913005905297]
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and inferring from graph-structured data.
We introduce a novel framework for efficient multiscale training of GNNs, designed to integrate information across multiscale representations of a graph.
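As an illustration of what a multiscale representation can mean in practice, the hypothetical sketch below coarsens an adjacency matrix by summing over node clusters; the paper's actual coarsening and training scheme may differ.

```python
import numpy as np

def coarsen(adj, partition):
    """Coarsen an adjacency matrix by summing over node clusters.

    `partition[i]` gives the coarse cluster id of fine node i. This is a
    generic coarsening step, not the specific scheme used in the paper.
    """
    n_coarse = partition.max() + 1
    # assignment matrix P: fine nodes x coarse clusters
    P = np.zeros((adj.shape[0], n_coarse))
    P[np.arange(adj.shape[0]), partition] = 1.0
    return P.T @ adj @ P  # coarse adjacency

adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
print(coarsen(adj, np.array([0, 0, 1, 1])))
```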
arXiv Detail & Related papers (2025-03-25T13:52:26Z)
- Distributed Graph Neural Network Inference With Just-In-Time Compilation For Industry-Scale Graphs [6.924892368183222]
Graph neural networks (GNNs) have delivered remarkable results in various fields.
The rapid increase in the scale of graph data has introduced significant performance bottlenecks for GNN inference.
This paper introduces an innovative processing paradigm for distributed graph learning that abstracts GNNs with a new set of programming interfaces.
arXiv Detail & Related papers (2025-03-08T13:26:59Z)
- Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks [53.10674067060148]
Shapley Interactions (SIs) quantify node contributions and interactions among multiple nodes.
By exploiting the GNN architecture, we show that the structure of interactions in node embeddings is preserved for graph prediction.
We introduce GraphSHAP-IQ, an efficient approach to compute any-order SIs exactly.
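For context, the sketch below computes exact Shapley values by brute-force subset enumeration, the O(2^n) baseline that GraphSHAP-IQ is designed to avoid by exploiting GNN structure; the toy payoff function is purely illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by brute-force subset enumeration.

    `value` maps a frozenset of players to a real payoff. This generic
    O(2^n) computation is shown only to define the quantity.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# toy game: payoff is the squared size of the coalition
print(shapley_values([0, 1, 2], lambda S: len(S) ** 2))
```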
arXiv Detail & Related papers (2025-01-28T13:37:44Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
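A minimal sketch of a concatenation-based convolution step, assuming a sum aggregator and a ReLU nonlinearity (both illustrative; SPGNN's exact update and its injectivity argument are given in the paper):

```python
import numpy as np

def concat_graph_conv(H, adj, W):
    """One concatenation-based convolution step: each node's new state is a
    linear map of [own features || summed neighbour features].
    """
    neigh = adj @ H                      # sum-aggregate neighbours
    Z = np.concatenate([H, neigh], 1)    # concatenation keeps self/neighbour info separate
    return np.maximum(Z @ W, 0.0)        # ReLU

H = np.random.randn(4, 8)
adj = np.ones((4, 4)) - np.eye(4)
W = np.random.randn(16, 8)
print(concat_graph_conv(H, adj, W).shape)  # (4, 8)
```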
arXiv Detail & Related papers (2024-04-21T13:11:59Z)
- CONCORD: Towards a DSL for Configurable Graph Code Representation [3.756550107432323]
We introduce CONCORD, a domain-specific language to build customizable graph representations.
We demonstrate its effectiveness in code smell detection as an illustrative use case.
CONCORD will help researchers create and experiment with customizable graph-based code representations.
arXiv Detail & Related papers (2024-01-31T16:16:48Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority on the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
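A minimal sketch of the random-projection idea, assuming a plain Gaussian (Johnson-Lindenstrauss-style) projection of pre-computed features; RpHGNN's hybrid pipeline is more involved:

```python
import numpy as np

def random_project(X, out_dim, seed=0):
    """Gaussian random projection to compress pre-computed features."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(X.shape[1], out_dim))
    return X @ R

# one-time message passing per relation would produce regular-shaped
# tensors like `feats`; projection then fixes their width
feats = np.random.randn(1000, 256)       # e.g. neighbour-aggregated features
print(random_project(feats, 64).shape)   # (1000, 64)
```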
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Layer-wise training for self-supervised learning on graphs [0.0]
End-to-end training of graph neural networks (GNNs) on large graphs presents several memory and computational challenges.
We propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner.
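A generic layer-wise training loop, shown with a placeholder self-supervised loss; the paper's regularized Graph Infomax objective would take its place:

```python
import torch
import torch.nn as nn

def train_layerwise(layers, X, adj, ssl_loss, epochs=10, lr=1e-3):
    """Train GNN layers one at a time: optimise layer k with a
    self-supervised loss while all earlier layers stay frozen.
    """
    h = X
    for layer in layers:
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            out = torch.relu(adj @ layer(h))   # one message-passing step
            loss = ssl_loss(out)
            loss.backward()
            opt.step()
        with torch.no_grad():                  # freeze: only embeddings flow on
            h = torch.relu(adj @ layer(h))
    return h

layers = [nn.Linear(16, 16) for _ in range(3)]
X, adj = torch.randn(5, 16), torch.eye(5)
emb = train_layerwise(layers, X, adj, lambda z: z.pow(2).mean())
```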
arXiv Detail & Related papers (2023-09-04T10:23:39Z)
- SCARA: Scalable Graph Neural Networks with Feature-Oriented Optimization [23.609017952951454]
We propose SCARA, a scalable Graph Neural Network (GNN) with feature-oriented optimization for graph computation.
SCARA efficiently computes graph embedding from node features, and further selects and reuses feature results to reduce overhead.
SCARA completes precomputation on Papers100M (111M nodes, 1.6B edges), the largest available billion-scale GNN dataset, in 100 seconds.
arXiv Detail & Related papers (2022-07-19T10:32:11Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
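A skeleton of this increase-and-conquer schedule, with a toy graphon and a placeholder training step standing in for the actual GNN update:

```python
import numpy as np

def sample_graphon_graph(n, W, rng):
    """Bernoulli-sample an n-node graph from a graphon W: [0,1]^2 -> [0,1]."""
    x = rng.uniform(size=n)
    P = W(x[:, None], x[None, :])
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T                             # undirected, no self-loops

def grow_and_train(W, sizes, train_step, rng=np.random.default_rng(0)):
    """Successively train on larger graphs sampled from the same graphon.

    `train_step` is a placeholder for whatever GNN update the paper uses.
    """
    params = None
    for n in sizes:                            # e.g. [64, 128, 256, ...]
        A = sample_graphon_graph(n, W, rng)
        params = train_step(params, A)         # GNN weights transfer across sizes
    return params

W = lambda x, y: 0.8 * np.exp(-3 * abs(x - y))   # a smooth example graphon
grow_and_train(W, [64, 128, 256], lambda p, A: (A.mean(),))
```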
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present GraphTheta, a new distributed graph learning system.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
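A condensed sketch of a FLAG-style update in PyTorch, with illustrative hyper-parameters; the graph structure is omitted from the model call for brevity:

```python
import torch

def flag_step(model, X, y, loss_fn, opt, m=3, step_size=1e-3):
    """One FLAG-style update: take m ascent steps on a feature perturbation
    while accumulating the model gradient, then apply the optimiser.
    """
    opt.zero_grad()
    delta = torch.zeros_like(X).uniform_(-step_size, step_size).requires_grad_()
    loss = loss_fn(model(X + delta), y) / m
    for _ in range(m - 1):
        loss.backward()
        # ascent step on the perturbation (sign of its gradient)
        delta = (delta.detach() + step_size * delta.grad.sign()).requires_grad_()
        loss = loss_fn(model(X + delta), y) / m
    loss.backward()
    opt.step()

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
flag_step(model, X, y, torch.nn.functional.cross_entropy, opt)
```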
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
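A generic weight-binarization scheme in the spirit of BGN, using a sign function with a straight-through gradient estimator; the paper's exact parameterization may differ:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarisation with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, g):
        (w,) = ctx.saved_tensors
        return g * (w.abs() <= 1).float()   # pass gradient only where |w| <= 1

def binary_gnn_layer(A, H, W_real):
    """One GNN layer whose weights are binarised on the fly."""
    W_bin = BinarizeSTE.apply(W_real)
    return torch.relu(A @ H @ W_bin)

A, H = torch.eye(3), torch.randn(3, 4)
W_real = torch.randn(4, 4, requires_grad=True)
binary_gnn_layer(A, H, W_real).sum().backward()  # grads flow to real weights
```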
arXiv Detail & Related papers (2020-04-19T09:43:14Z)