A Deep Graph Neural Networks Architecture Design: From Global
Pyramid-like Shrinkage Skeleton to Local Topology Link Rewiring
- URL: http://arxiv.org/abs/2012.08717v1
- Date: Wed, 16 Dec 2020 03:14:31 GMT
- Title: A Deep Graph Neural Networks Architecture Design: From Global
Pyramid-like Shrinkage Skeleton to Local Topology Link Rewiring
- Authors: Gege Zhang
- Abstract summary: We propose a three-pipeline training framework based on critical expressivity, including global model contraction, weight evolution, and link weight rewiring.
We analyze the reason for the modularity (clustering) phenomenon in network topology and use it to rewire potentially erroneous weighted links.
The architecture design on GNNs, in turn, verifies the expressivity of GNNs from the perspectives of dynamics and topological space.
- Score: 1.455240131708017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Expressivity plays a fundamental role in evaluating deep neural networks, and
it is closely related to understanding the limit of performance improvement. In
this paper, we propose a three-pipeline training framework based on critical
expressivity, including global model contraction, weight evolution, and link
weight rewiring. Specifically, we propose a pyramid-like skeleton to overcome
the saddle points that affect information transfer. Then we analyze the reason
for the modularity (clustering) phenomenon in network topology and use it to
rewire potentially erroneous weighted links. We conduct numerical experiments on
node classification and the results confirm that the proposed training
framework leads to significantly improved performance in terms of fast
convergence and robustness to potentially erroneous weighted links. The
architecture design on GNNs, in turn, verifies the expressivity of GNNs from
the perspectives of dynamics and topological space and provides useful guidelines for
designing more efficient neural networks.
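The abstract's two concrete mechanisms, a pyramid-like shrinkage skeleton and modularity-based rewiring of suspect links, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the class and function names (ShrinkageGNN, cross_community_edges), the geometric shrink factor, the dense normalized-adjacency propagation, and the use of networkx's greedy modularity communities as a stand-in for the paper's modularity analysis are all illustrative assumptions.

```python
# A minimal sketch, assuming a simple GCN-style propagation and a geometric
# width schedule; names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


class ShrinkageGNN(nn.Module):
    """GCN stack whose hidden widths shrink layer by layer (pyramid-like skeleton)."""

    def __init__(self, in_dim, num_classes, depth=4, base_width=256, shrink=0.5):
        super().__init__()
        widths = [in_dim] + [max(int(base_width * shrink**i), num_classes)
                             for i in range(depth)]
        self.weights = nn.ModuleList(
            [nn.Linear(widths[i], widths[i + 1], bias=False) for i in range(depth)]
        )
        self.out = nn.Linear(widths[-1], num_classes)

    def forward(self, x, adj):
        # adj is the symmetrically normalized adjacency D^{-1/2}(A + I)D^{-1/2}
        for lin in self.weights:
            x = torch.relu(adj @ lin(x))
        return self.out(x)


def normalized_adjacency(edge_list, num_nodes):
    """Dense D^{-1/2}(A + I)D^{-1/2}; adequate for small node-classification graphs."""
    a = torch.eye(num_nodes)
    for u, v in edge_list:
        a[u, v] = a[v, u] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


def cross_community_edges(edge_list):
    """Flag edges that cross modularity communities as rewiring candidates."""
    g = nx.Graph(edge_list)
    communities = greedy_modularity_communities(g)
    label = {n: i for i, c in enumerate(communities) for n in c}
    return [(u, v) for u, v in edge_list if label[u] != label[v]]


if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
    adj = normalized_adjacency(edges, num_nodes=6)
    model = ShrinkageGNN(in_dim=8, num_classes=2)
    logits = model(torch.randn(6, 8), adj)   # (6, 2) class scores
    print(cross_community_edges(edges))      # e.g. [(2, 3)], the inter-community link
```

Shrinking widths give the stack a pyramid profile in parameter count, while edges that cross detected communities are natural candidates for the link-weight rewiring step described in the abstract.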
Related papers
- Advanced Financial Fraud Detection Using GNN-CL Model [13.5240775562349]
The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection.
It combines the advantages of graph neural networks (GNN), convolutional neural networks (CNN), and long short-term memory (LSTM) networks.
A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity.
arXiv Detail & Related papers (2024-07-09T03:59:06Z)
- Position-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing [67.83086131278904]
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes.
We propose a novel position-aware graph structure learning framework named PASTEL.
Our key insight is to enhance the connectivity of nodes within the same class for more supervision information.
arXiv Detail & Related papers (2022-08-17T14:04:21Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Characterizing Learning Dynamics of Deep Neural Networks via Complex Networks [1.0869257688521987]
Complex Network Theory (CNT) represents Deep Neural Networks (DNNs) as directed weighted graphs to study them as dynamical systems.
We introduce metrics for nodes/neurons and layers, namely Nodes Strength and Layers Fluctuation.
Our framework distills trends in the learning dynamics and separates low- from high-accuracy networks.
arXiv Detail & Related papers (2021-10-06T10:03:32Z)
- Curvature Graph Neural Network [8.477559786537919]
We introduce discrete graph curvature (the Ricci curvature) to quantify the strength of structural connection of pairwise nodes.
We propose Curvature Graph Neural Network (CGNN), which effectively improves the adaptive locality ability of GNNs.
The experimental results on synthetic datasets show that CGNN effectively exploits the topology structure information (a minimal curvature sketch appears after this list).
arXiv Detail & Related papers (2021-06-30T00:56:03Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of improving the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
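Prompted by the Curvature Graph Neural Network entry above, the sketch below computes an augmented Forman curvature per edge, F(u, v) = 4 - deg(u) - deg(v) + 3 * #triangles(u, v), as a cheap combinatorial proxy for the discrete Ricci curvature that CGNN uses to quantify how strongly pairs of nodes are structurally connected. The function name and the choice of Forman rather than Ollivier-Ricci curvature are assumptions for illustration, not details taken from the CGNN paper.

```python
# A minimal sketch: augmented Forman curvature as a stand-in for the discrete
# Ricci curvature mentioned in the CGNN entry; formulation chosen for simplicity.
import networkx as nx


def forman_curvature(g: nx.Graph) -> dict:
    """Augmented Forman curvature for each edge of an unweighted graph."""
    curv = {}
    for u, v in g.edges():
        triangles = len(set(g[u]) & set(g[v]))  # common neighbours of u and v
        curv[(u, v)] = 4 - g.degree(u) - g.degree(v) + 3 * triangles
    return curv


if __name__ == "__main__":
    g = nx.barbell_graph(4, 0)  # two 4-cliques joined by a single bridge edge
    for edge, c in sorted(forman_curvature(g).items(), key=lambda kv: kv[1]):
        print(edge, c)          # the bridge edge gets the most negative value
```

Bridge-like edges come out strongly negative and tightly clustered edges positive, which is the kind of structural signal a curvature-aware GNN can exploit for adaptive locality.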
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.