SVDformer: Direction-Aware Spectral Graph Embedding Learning via SVD and Transformer
- URL: http://arxiv.org/abs/2508.13435v1
- Date: Tue, 19 Aug 2025 01:32:18 GMT
- Title: SVDformer: Direction-Aware Spectral Graph Embedding Learning via SVD and Transformer
- Authors: Jiayu Fang, Zhiqi Shao, S T Boris Choy, Junbin Gao
- Abstract summary: SVDformer is a novel framework that synergizes SVD and the Transformer architecture for direction-aware graph representation learning. Experiments on six directed graph benchmarks demonstrate that SVDformer consistently outperforms state-of-the-art GNNs and direction-aware baselines on node classification tasks.
- Score: 24.552037222044504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Directed graphs are widely used to model asymmetric relationships in real-world systems. However, existing directed graph neural networks often struggle to jointly capture directional semantics and global structural patterns due to their isotropic aggregation and localized filtering mechanisms. To address this limitation, this paper proposes SVDformer, a novel framework that synergizes SVD and the Transformer architecture for direction-aware graph representation learning. SVDformer first refines singular-value embeddings through multi-head self-attention, adaptively enhancing critical spectral components while suppressing high-frequency noise. This enables learnable low-pass/high-pass graph filtering without requiring spectral kernels. Furthermore, by treating singular vectors as directional projection bases and singular values as scaling factors, SVDformer uses the Transformer to model multi-scale interactions between incoming/outgoing edge patterns through attention weights, thereby explicitly preserving edge directionality during feature propagation. Extensive experiments on six directed graph benchmarks demonstrate that SVDformer consistently outperforms state-of-the-art GNNs and direction-aware baselines on node classification tasks, establishing a new paradigm for learning representations on directed graphs.
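To make the pipeline concrete, here is a minimal PyTorch sketch of the mechanism the abstract describes: a truncated SVD of the directed adjacency matrix, multi-head self-attention over singular-value tokens, and propagation through the left/right singular bases as outgoing/incoming views. The module names, dimensions, and softplus rescaling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SVDformerSketch(nn.Module):
    """Illustrative sketch only: truncated SVD of a directed adjacency
    matrix, self-attention over singular-value tokens, and direction-aware
    propagation through the left (outgoing) and right (incoming) bases."""

    def __init__(self, num_feats, hidden, k, heads=4):
        super().__init__()
        self.k = k
        self.feat_proj = nn.Linear(num_feats, hidden)
        # Each singular value becomes a token so attention can reweight
        # spectral components (learnable low-/high-pass filtering).
        self.sigma_embed = nn.Linear(1, hidden)
        self.spec_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.sigma_out = nn.Linear(hidden, 1)
        self.out = nn.Linear(3 * hidden, hidden)

    def forward(self, A, X):
        # A: (n, n) dense adjacency, X: (n, num_feats) node features.
        U, S, V = torch.svd_lowrank(A, q=self.k)
        tokens = self.sigma_embed(S.view(-1, 1)).unsqueeze(0)  # (1, k, d)
        refined, _ = self.spec_attn(tokens, tokens, tokens)
        gains = torch.nn.functional.softplus(self.sigma_out(refined)).view(-1, 1)
        H = self.feat_proj(X)
        H_out = U @ (gains * (U.t() @ H))  # outgoing-edge view
        H_in = V @ (gains * (V.t() @ H))   # incoming-edge view
        return self.out(torch.cat([H, H_out, H_in], dim=-1))
```

Here `U` and `V` play the role of directional projection bases and the attention-refined singular values act as learnable spectral gains; `hidden` must be divisible by `heads`, e.g. `SVDformerSketch(num_feats=16, hidden=64, k=32)(A, X)`.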
Related papers
- Evolutionary Router Feature Generation for Zero-Shot Graph Anomaly Detection with Mixture-of-Experts [60.60414602796664]
We propose a novel MoE framework with evolutionary router feature generation (EvoFG) for zero-shot GAD. EvoFG consistently outperforms state-of-the-art baselines, achieving strong and stable zero-shot GAD performance.
arXiv Detail & Related papers (2026-02-12T06:16:51Z)
- Plain Transformers are Surprisingly Powerful Link Predictors [57.01966734467712]
Link prediction is a core challenge in graph machine learning, demanding models that capture rich and complex topological dependencies. While Graph Neural Networks (GNNs) are the standard solution, state-of-the-art pipelines often rely on explicit structural features or memory-intensive node embeddings. We present PENCIL, an encoder-only plain Transformer that replaces hand-crafted priors with attention over sampled local subgraphs.
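As a rough illustration of "attention over sampled local subgraphs," the sketch below gathers the joint k-hop neighbourhood of a candidate edge as the token set for an encoder-only Transformer. The sampler, the role labels, and all names are hypothetical stand-ins, not PENCIL's actual procedure.

```python
import networkx as nx
import torch


def link_tokens(G, u, v, hops=2, max_nodes=32):
    """Collect the joint k-hop neighbourhood of candidate edge (u, v)
    as token indices, with role ids marking the two endpoints."""
    nodes = set()
    for root in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(G, root, cutoff=hops))
    nodes = sorted(nodes)[:max_nodes]
    # Role ids let attention separate endpoints from context, rather than
    # relying on hand-crafted structural features such as common neighbours.
    roles = [0 if n == u else 1 if n == v else 2 for n in nodes]
    return torch.tensor(nodes), torch.tensor(roles)


# The (nodes, roles) pair would be embedded, passed through an
# nn.TransformerEncoder, pooled, and scored by an MLP to predict the link.
```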
arXiv Detail & Related papers (2026-02-02T02:45:52Z)
- Generalized Graph Transformer Variational Autoencoder [0.0]
We propose the Generalized Graph Transformer Variational Autoencoder (GGT-VAE). Our model integrates a Generalized Graph Transformer architecture with the Variational Autoencoder framework for link prediction. Experimental results on several benchmark datasets demonstrate that GGT-VAE consistently achieves above-baseline performance.
arXiv Detail & Related papers (2025-11-29T19:53:44Z)
- HeSRN: Representation Learning On Heterogeneous Graphs via Slot-Aware Retentive Network [22.60005673964228]
HeSRN is a novel Heterogeneous Slot-aware Retentive Network for efficient and expressive heterogeneous graph representation learning. HeSRN consistently outperforms state-of-the-art heterogeneous graph neural networks and Graph Transformer baselines on node classification tasks.
arXiv Detail & Related papers (2025-10-10T18:18:06Z)
- Self-Supervised Graph Learning via Spectral Bootstrapping and Laplacian-Based Augmentations [1.0377683220196872]
We present LaplaceGNN, a novel self-supervised graph learning framework. Our method integrates Laplacian-based signals into the learning process. LaplaceGNN achieves superior performance compared to state-of-the-art self-supervised graph methods.
arXiv Detail & Related papers (2025-06-25T12:23:23Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
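The random-walk context sampling step can be pictured with the toy sampler below; the walk length and adjacency format are assumptions for illustration, not GSPT's exact procedure.

```python
import random


def random_walk_context(adj, start, length=8):
    """Sample a node-context sequence via a simple random walk.
    `adj` maps each node to its list of neighbours."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:  # dead end: stop the walk early
            break
        walk.append(random.choice(nbrs))
    return walk


# e.g. adj = {0: [1, 2], 1: [0, 2], 2: [0]}; random_walk_context(adj, 0)
# yields a node sequence a plain Transformer can consume like a sentence.
```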
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial for enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes such as pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- Graph Transformers without Positional Encodings [0.7252027234425334]
We introduce Eigenformer, a Graph Transformer employing a novel spectrum-aware attention mechanism cognizant of the Laplacian spectrum of the graph.
We empirically show that it achieves performance competitive with SOTA Graph Transformers on a number of standard GNN benchmarks.
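One plausible reading of such a spectrum-aware mechanism, sketched below purely as an assumption, is an attention bias built from a learned function of the Laplacian eigenpairs; this is not necessarily Eigenformer's published formulation.

```python
import torch
import torch.nn as nn


class SpectrumAwareBias(nn.Module):
    """Hedged sketch: build an attention bias B = sum_k g(lambda_k) u_k u_k^T
    from the Laplacian eigendecomposition, so attention sees global spectral
    structure without explicit positional encodings."""

    def __init__(self, hidden=16):
        super().__init__()
        # g: a small learned spectral filter over eigenvalues.
        self.g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, L):
        evals, evecs = torch.linalg.eigh(L)  # symmetric Laplacian: (n,), (n, n)
        gains = self.g(evals.unsqueeze(-1)).squeeze(-1)
        # The (n, n) bias would be added to raw attention scores before softmax.
        return evecs @ torch.diag(gains) @ evecs.t()
```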
arXiv Detail & Related papers (2024-01-31T12:33:31Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [72.33336385797944]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias. We show that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Transformers over Directed Acyclic Graphs [6.263470141349622]
We study transformers over directed acyclic graphs (DAGs) and propose architecture adaptations tailored to DAGs.
We show that it is effective in making graph transformers generally outperform graph neural networks tailored to DAGs and in improving SOTA graph transformer performance in terms of both quality and efficiency.
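One natural DAG-tailored adaptation, shown here as a hedged sketch rather than the authors' exact scheme, is to mask attention by the partial order so each node attends only to itself and its ancestors:

```python
import networkx as nx
import torch


def dag_attention_mask(G):
    """Boolean (n, n) mask where mask[i, j] is True iff node j is i itself
    or an ancestor of i in the DAG G, encoding the partial order directly."""
    nodes = list(nx.topological_sort(G))
    idx = {n: i for i, n in enumerate(nodes)}
    mask = torch.zeros(len(nodes), len(nodes), dtype=torch.bool)
    for n in nodes:
        mask[idx[n], idx[n]] = True
        for anc in nx.ancestors(G, n):
            mask[idx[n], idx[anc]] = True
    # True marks *visible* positions here; pass ~mask to
    # nn.MultiheadAttention, where True marks disallowed positions.
    return mask
```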
arXiv Detail & Related papers (2022-10-24T12:04:52Z)
- Spectral Transform Forms Scalable Transformer [1.19071399645846]
This work draws on the philosophy of self-attention and proposes an efficient spectral-based neural unit that employs informative long-range temporal interactions.
The developed spectral window unit (SW) model makes predictions on scalable dynamic graphs with assured efficiency.
arXiv Detail & Related papers (2021-11-15T08:46:01Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
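The fixed wavelets that such a module relaxes can be written down directly. The sketch below builds the standard dyadic diffusion wavelet filter bank Psi_j = P^(2^(j-1)) - P^(2^j) from a lazy random walk; LEGS-style learning would, roughly, replace the fixed dyadic scales with a learnable selection over walk powers.

```python
import torch


def diffusion_wavelets(A, J=4):
    """Dyadic diffusion wavelet filter bank used in geometric scattering:
    Psi_j = P^(2^(j-1)) - P^(2^j), with P the lazy random walk matrix."""
    n = A.shape[0]
    deg = A.sum(dim=0).clamp(min=1.0)   # degrees; clamp guards isolated nodes
    P = 0.5 * (torch.eye(n) + A / deg)  # lazy random walk P = (I + A D^-1) / 2
    powers = [P]
    for _ in range(J):
        powers.append(powers[-1] @ powers[-1])  # P^(2^j) by repeated squaring
    return [powers[j - 1] - powers[j] for j in range(1, J + 1)]
```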
arXiv Detail & Related papers (2020-10-06T01:20:27Z)