LightDiC: A Simple yet Effective Approach for Large-scale Digraph
Representation Learning
- URL: http://arxiv.org/abs/2401.11772v2
- Date: Sun, 18 Feb 2024 01:40:24 GMT
- Authors: Xunkai Li, Meihao Liao, Zhengyu Wu, Daohan Su, Wentao Zhang, Rong-Hua
Li, Guoren Wang
- Abstract summary: We propose LightDiC, a scalable variant of the digraph convolution based on the magnetic Laplacian.
LightDiC is the first DiGNN to deliver satisfactory results on the most representative large-scale dataset (ogbn-papers100M).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing graph neural networks (GNNs) are limited to undirected graphs,
whose restricted scope of the captured relational information hinders their
expressive capabilities and deployments in real-world scenarios. Compared with
undirected graphs, directed graphs (digraphs) fit the demand for modeling more
complex topological systems by capturing more intricate relationships between
nodes, such as formulating transportation and financial networks. While some
directed GNNs have been introduced, their designs are largely borrowed from deep
learning architectures, which introduces redundant complexity and computation
and makes them inapplicable to large-scale graphs. To address these issues, we
propose LightDiC, a scalable variant of the digraph convolution based on the
magnetic Laplacian. Since topology-related computations are conducted solely
during offline pre-processing, LightDiC achieves exceptional scalability,
enabling downstream predictions to be trained separately without incurring
recursive computational costs. Theoretical analysis shows that LightDiC
utilizes directed information to achieve message passing based on the complex
field, which corresponds to the proximal gradient descent process of the
Dirichlet energy optimization function from the perspective of digraph signal
denoising, ensuring its expressiveness. Experimental results demonstrate that
LightDiC performs comparably well or even outperforms other SOTA methods in
various downstream tasks, with fewer learnable parameters and higher training
efficiency. Notably, LightDiC is the first DiGNN to deliver satisfactory
results on the most representative large-scale dataset (ogbn-papers100M).
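The magnetic Laplacian construction named in the abstract is a standard operator and can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: the phase parameter `q`, the hop count, and all function names are assumptions. The propagation operator `P = I - L` used below corresponds to one gradient step on the Dirichlet energy tr(Z^H L Z), matching the signal-denoising view described in the abstract.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Normalized magnetic Laplacian of a digraph adjacency matrix A.

    The symmetrized magnitudes encode connectivity, while the complex
    phase exp(i * 2*pi*q * (A - A.T)) encodes edge direction, yielding
    a Hermitian operator with real, non-negative eigenvalues.
    """
    A_sym = 0.5 * (A + A.T)                  # symmetrized adjacency
    theta = 2.0 * np.pi * q * (A - A.T)      # antisymmetric phase matrix
    H = A_sym * np.exp(1j * theta)           # Hermitian "magnetic" adjacency
    d = A_sym.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    D = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D @ H @ D

def precompute_features(A, X, hops=2, q=0.25):
    """Offline propagation: each hop applies P = I - L, i.e. one
    descent step on the Dirichlet energy of the complex node signal.
    The stacked outputs can feed any decoupled downstream classifier."""
    L = magnetic_laplacian(A, q)
    P = np.eye(A.shape[0]) - L
    Z = X.astype(complex)
    feats = [Z]
    for _ in range(hops):
        Z = P @ Z
        feats.append(Z)
    return feats

# Tiny 3-node directed chain: 0 -> 1 -> 2, with one-hot features.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
L = magnetic_laplacian(A)
feats = precompute_features(A, np.eye(3))
```

Because the topology-dependent work happens entirely in `precompute_features`, training on `feats` incurs no recursive neighborhood aggregation, which is the source of the scalability claim.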
Related papers
- Non-Euclidean Hierarchical Representational Learning using Hyperbolic Graph Neural Networks for Environmental Claim Detection [1.3673890873313355]
Transformer-based models dominate NLP tasks like sentiment analysis, machine translation, and claim verification.
In this work, we explore Graph Neural Networks (GNNs) and Hyperbolic Graph Neural Networks (HGNNs) as lightweight yet effective alternatives for Environmental Claim Detection.
arXiv Detail & Related papers (2025-02-19T11:04:59Z)
- GraphMinNet: Learning Dependencies in Graphs with Light Complexity Minimal Architecture [12.267920696617017]
This paper introduces GraphMinNet, a novel GNN architecture that generalizes the idea of minimal Gated Recurrent Units to graph-structured data.
Our approach achieves efficient long-range dependency (LRD) modeling with linear computational complexity.
Our results show superior performance on 6 out of 10 datasets and competitive results on the others.
arXiv Detail & Related papers (2025-02-01T02:46:48Z)
- DeltaGNN: Graph Neural Network with Information Flow Control [5.563171090433323]
Graph Neural Networks (GNNs) are designed to process graph-structured data through neighborhood aggregations in the message passing process.
Message-passing enables GNNs to understand short-range spatial interactions, but also causes them to suffer from over-smoothing and over-squashing.
We propose a mechanism called "information flow control" to address over-smoothing and over-squashing with linear computational overhead.
We benchmark our model across 10 real-world datasets, including graphs with varying sizes, topologies, densities, and homophilic ratios, showing superior performance.
arXiv Detail & Related papers (2025-01-10T14:34:20Z)
- LASE: Learned Adjacency Spectral Embeddings [7.612218105739107]
We learn nodal Adjacency Spectral Embeddings (ASE) from graph inputs.
LASE is interpretable, parameter efficient, and robust to inputs with unobserved edges.
LASE layers combine Graph Convolutional Network (GCN) and fully-connected Graph Attention Network (GAT) modules.
arXiv Detail & Related papers (2024-12-23T17:35:19Z)
- MassiveGNN: Efficient Training via Prefetching for Massively Connected Distributed Graphs [11.026326555186333]
This paper develops a parameterized continuous prefetch and eviction scheme on top of the state-of-the-art Amazon DistDGL distributed GNN framework.
It demonstrates about 15-40% improvement in end-to-end training performance on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer.
arXiv Detail & Related papers (2024-10-30T05:10:38Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN)
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, stacking many graph convolutions degrades performance, and several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
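The DAGNN entry above describes decoupling feature transformation from propagation and then combining information from large receptive fields. The combination step can be sketched as follows; this is illustrative only, since in the paper the per-hop weights are learned from the stacked representations, while here they are fixed, and all names are assumptions.

```python
import numpy as np

def deep_adaptive_readout(A_hat, H0, K=10, s=None):
    """Stack 0..K-hop propagations of the transformed features H0 and
    combine them with per-hop weights s (uniform placeholder here;
    DAGNN learns these weights adaptively per node)."""
    hops = [H0]
    H = H0
    for _ in range(K):
        H = A_hat @ H                      # one more hop of smoothing
        hops.append(H)
    if s is None:
        s = np.full(K + 1, 1.0 / (K + 1))  # placeholder combination weights
    return sum(w * Hk for w, Hk in zip(s, hops))

# Symmetrically normalized adjacency with self-loops for a 2-node graph.
A = np.array([[0., 1.], [1., 0.]])
A_tilde = A + np.eye(2)
deg = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(deg, deg))
out = deep_adaptive_readout(A_hat, np.eye(2), K=4)
```

Because propagation uses no learnable parameters, depth (large `K`) can be increased without the training instability of deeply stacked convolution layers, which is the point of the decoupled design.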
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.