LightDiC: A Simple yet Effective Approach for Large-scale Digraph
Representation Learning
- URL: http://arxiv.org/abs/2401.11772v2
- Date: Sun, 18 Feb 2024 01:40:24 GMT
- Title: LightDiC: A Simple yet Effective Approach for Large-scale Digraph
Representation Learning
- Authors: Xunkai Li, Meihao Liao, Zhengyu Wu, Daohan Su, Wentao Zhang, Rong-Hua
Li, Guoren Wang
- Abstract summary: We propose LightDiC, a scalable variant of the digraph convolution based on the magnetic Laplacian.
LightDiC is the first DiGNN to deliver satisfactory results on the most representative large-scale dataset (ogbn-papers100M).
- Score: 42.72417353512392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing graph neural networks (GNNs) are limited to undirected graphs,
whose restricted scope of captured relational information hinders their
expressive capability and deployment in real-world scenarios. Compared with
undirected graphs, directed graphs (digraphs) better fit the demand for modeling
complex topological systems by capturing more intricate relationships between
nodes, as in transportation and financial networks. While some
directed GNNs have been introduced, their inspiration mainly comes from deep
learning architectures, which leads to redundant complexity and computation,
making them inapplicable to large-scale databases. To address these issues, we
propose LightDiC, a scalable variant of the digraph convolution based on the
magnetic Laplacian. Since topology-related computations are conducted solely
during offline pre-processing, LightDiC achieves exceptional scalability,
enabling downstream predictions to be trained separately without incurring
recursive computational costs. Theoretical analysis shows that LightDiC
uses directed information to perform message passing in the complex field,
and that this corresponds to proximal gradient descent on a Dirichlet energy
objective from the perspective of digraph signal denoising, which ensures its
expressiveness. Experimental results demonstrate that LightDiC performs on par
with, or even outperforms, other SOTA methods across various downstream tasks,
with fewer learnable parameters and higher training efficiency. Notably,
LightDiC is the first DiGNN to deliver satisfactory results on the most
representative large-scale dataset (ogbn-papers100M).
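Since the abstract's central claim is that all topology-related computation happens in offline pre-processing, a minimal sketch of that step may help. It assumes the standard normalized magnetic Laplacian, L_q = I - D^{-1/2} A_s D^{-1/2} (elementwise) exp(i * Theta_q), with A_s = (A + A^T)/2 and Theta_q = 2*pi*q*(A - A^T); the function names, the phase parameter q, and the hop count below are illustrative assumptions, not the paper's actual code or settings.

```python
# Hedged sketch (not the authors' implementation) of magnetic-Laplacian
# pre-processing for a LightDiC-style pipeline. Assumed/illustrative:
# the phase parameter q, num_hops, and all function names.
import numpy as np
import scipy.sparse as sp

def magnetic_laplacian(A: sp.csr_matrix, q: float = 0.25) -> sp.csr_matrix:
    """L_q = I - D^{-1/2} A_s D^{-1/2} (elementwise) exp(i * Theta_q)."""
    A_s = 0.5 * (A + A.T)                    # symmetrized edge magnitudes
    Theta = 2.0 * np.pi * q * (A - A.T)      # antisymmetric phases encode direction
    # Dense phases for clarity only; large graphs would compute them per edge.
    H = A_s.multiply(np.exp(1j * Theta.toarray()))   # Hermitian magnetic adjacency
    deg = np.asarray(A_s.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.where(deg > 0, deg, 1.0) ** -0.5)
    n = A.shape[0]
    return sp.eye(n, dtype=complex) - d_inv_sqrt @ H @ d_inv_sqrt

def precompute_propagated_features(A, X, num_hops=3, q=0.25):
    """Offline step: all topology-related computation happens here, so the
    downstream predictor trains on fixed features with no recursive cost."""
    P = sp.eye(A.shape[0], dtype=complex) - magnetic_laplacian(A, q)
    feats, Z = [X.astype(complex)], X.astype(complex)
    for _ in range(num_hops):
        Z = P @ Z                            # complex-field message passing
        feats.append(Z)
    return feats  # e.g., concatenate real/imag parts and fit a small MLP
```

Because every use of the graph structure finishes before training begins, the downstream predictor can be an ordinary MLP over the pre-computed (real and imaginary) features, which is what makes graphs on the scale of ogbn-papers100M tractable under this decoupled design.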
Related papers
- MassiveGNN: Efficient Training via Prefetching for Massively Connected Distributed Graphs [11.026326555186333]
This paper develops a parameterized continuous prefetch and eviction scheme on top of the state-of-the-art Amazon DistDGL distributed GNN framework.
It demonstrates about a 15-40% improvement in end-to-end training performance on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer.
arXiv Detail & Related papers (2024-10-30T05:10:38Z) - DiRW: Path-Aware Digraph Learning for Heterophily [23.498557237805414]
Graph neural networks (GNNs) have emerged as a powerful representation learning tool for graph-structured data.
We propose Directed Random Walk (DiRW), which can be viewed as a plug-and-play strategy or an innovative neural architecture.
DiRW incorporates a direction-aware path sampler optimized from perspectives of walk probability, length, and number.
arXiv Detail & Related papers (2024-10-14T09:26:56Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Layer-wise training for self-supervised learning on graphs [0.0]
End-to-end training of graph neural networks (GNNs) on large graphs presents several memory and computational challenges.
We propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner.
arXiv Detail & Related papers (2023-09-04T10:23:39Z) - All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) combine the rich relational structure of graph data with representation learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method to maintain all three metrics discussed in the paper.
arXiv Detail & Related papers (2021-10-26T15:56:16Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deep GNNs to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields; a minimal sketch of such adaptive hop aggregation follows this list.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
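As a companion to the DAGNN entry above, here is a minimal, hedged sketch of adaptive hop aggregation in PyTorch; the module name and the sigmoid gating are illustrative assumptions in the spirit of DAGNN, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AdaptiveHopAggregation(nn.Module):
    """Sketch of DAGNN-style adaptive aggregation: learn a per-node,
    per-hop retention score and mix the propagated representations."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)  # learnable projection for gating

    def forward(self, hop_stack: torch.Tensor) -> torch.Tensor:
        # hop_stack: [num_nodes, K + 1, hidden_dim], one slice per hop 0..K
        gates = torch.sigmoid(self.score(hop_stack))  # [N, K + 1, 1] scores
        return (gates * hop_stack).sum(dim=1)         # adaptive receptive field
```

Given hop-wise representations stacked along dim=1 (for example, from a decoupled propagation like the one sketched after the abstract), the module returns one adaptively mixed embedding per node.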
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.