GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers
- URL: http://arxiv.org/abs/2411.17296v3
- Date: Thu, 29 May 2025 04:23:01 GMT
- Title: GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers
- Authors: Guoguo Ai, Guansong Pang, Hezhe Qiao, Yuan Gao, Hui Yan
- Abstract summary: Graph Transformers (GTs) have demonstrated remarkable performance in graph representation learning over popular graph neural networks (GNNs). However, self-attention, the core module of GTs, preserves only low-frequency signals in graph features, leading to ineffectiveness in capturing other important signals like high-frequency ones. We propose a Graph Fourier Kolmogorov-Arnold Transformer (GrokFormer) that learns highly expressive spectral filters with adaptive graph spectrum and spectral order.
- Score: 21.601090849000247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Transformers (GTs) have demonstrated remarkable performance in graph representation learning over popular graph neural networks (GNNs). However, self-attention, the core module of GTs, preserves only low-frequency signals in graph features, leading to ineffectiveness in capturing other important signals like high-frequency ones. Some recent GT models help alleviate this issue, but their flexibility and expressiveness are still limited since the filters they learn are fixed to a predefined graph spectrum or spectral order. To tackle this challenge, we propose a Graph Fourier Kolmogorov-Arnold Transformer (GrokFormer), a novel GT model that learns highly expressive spectral filters with adaptive graph spectrum and spectral order through Fourier series modeling over learnable activation functions. We demonstrate theoretically and empirically that the proposed GrokFormer filter offers better expressiveness than other spectral methods. Comprehensive experiments on 10 real-world node classification datasets across various domains, scales, and graph properties, as well as 5 graph classification datasets, show that GrokFormer outperforms state-of-the-art GTs and GNNs. Our code is available at https://github.com/GGA23/GrokFormer
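The abstract's core mechanism, a spectral filter written as a truncated Fourier series with learnable coefficients over the Laplacian spectrum, can be sketched as below. This is a minimal PyTorch reading, not the released implementation; the fixed order K, the eigendecomposition interface, and lam_max = 2.0 (the upper bound for a symmetric normalized Laplacian) are assumptions.
```python
import torch
import torch.nn as nn

class FourierSeriesFilter(nn.Module):
    """h(lam) = a0 + sum_k [ a_k cos(k*pi*lam/lam_max) + b_k sin(k*pi*lam/lam_max) ]."""

    def __init__(self, num_orders: int = 4, lam_max: float = 2.0):
        super().__init__()
        self.num_orders = num_orders
        self.lam_max = lam_max
        init = torch.zeros(num_orders + 1)
        init[0] = 1.0                      # start as an all-pass filter
        self.a = nn.Parameter(init)        # cosine coefficients; a[0] is the DC term
        self.b = nn.Parameter(torch.zeros(num_orders))  # sine coefficients

    def response(self, lam: torch.Tensor) -> torch.Tensor:
        h = self.a[0] * torch.ones_like(lam)
        for k in range(1, self.num_orders + 1):
            t = k * torch.pi * lam / self.lam_max
            h = h + self.a[k] * torch.cos(t) + self.b[k - 1] * torch.sin(t)
        return h

    def forward(self, eigvals, eigvecs, x):
        # Filter node features in the spectral domain: U diag(h(lam)) U^T x
        h = self.response(eigvals)
        return eigvecs @ (h.unsqueeze(-1) * (eigvecs.T @ x))
```
Here eigvals and eigvecs would come from torch.linalg.eigh of the normalized graph Laplacian; the adaptive spectral order claimed by the paper is not captured by this fixed-K sketch.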
Related papers
- Piecewise Constant Spectral Graph Neural Network [5.048196692772085]
Graph Neural Networks (GNNs) have achieved significant success across various domains by leveraging graph structures in data. Existing spectral GNNs, which use low-degree filters to capture spectral properties, may not fully identify the graph's spectral characteristics because of their low degree. We introduce the Piecewise Constant Spectral Graph Neural Network (PieCoN) to address these challenges.
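A piecewise constant spectral filter of the kind the title suggests can be sketched by binning the spectrum and learning one gain per bin; the uniform partition, bin count, and lam_max are illustrative assumptions, not the paper's construction.
```python
import torch
import torch.nn as nn

class PiecewiseConstantFilter(nn.Module):
    def __init__(self, num_pieces: int = 8, lam_max: float = 2.0):
        super().__init__()
        # Uniform partition of [0, lam_max]; the paper may place boundaries adaptively.
        self.register_buffer("edges", torch.linspace(0.0, lam_max, num_pieces + 1))
        self.gains = nn.Parameter(torch.ones(num_pieces))

    def forward(self, eigvals, eigvecs, x):
        # Map each eigenvalue to its bin, then apply that bin's learned gain.
        idx = torch.bucketize(eigvals, self.edges[1:-1])
        h = self.gains[idx]
        return eigvecs @ (h.unsqueeze(-1) * (eigvecs.T @ x))
```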
arXiv Detail & Related papers (2025-05-07T21:17:06Z)
- Tensor-Fused Multi-View Graph Contrastive Learning [12.412040359604163]
Graph contrastive learning (GCL) has emerged as a promising approach to enhance graph neural networks' (GNNs) ability to learn rich representations from unlabeled graph-structured data.
Current GCL models face challenges with computational demands and limited feature utilization.
We propose TensorMV-GCL, a novel framework that integrates extended persistent homology with GCL representations and facilitates multi-scale feature extraction.
arXiv Detail & Related papers (2024-10-20T01:40:12Z)
- GrassNet: State Space Model Meets Graph Neural Network [57.62885438406724]
Graph State Space Network (GrassNet) is a novel graph neural network with theoretical support that provides a simple yet effective scheme for designing arbitrary graph spectral filters.
To the best of our knowledge, our work is the first to employ SSMs for the design of graph GNN spectral filters.
Extensive experiments on nine public benchmarks reveal that GrassNet achieves superior performance in real-world graph modeling tasks.
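How a sequence model can design a spectral filter is easy to sketch: treat the (already sorted) eigenvalues as a 1-D sequence and scan them into per-eigenvalue filter values. Substituting a GRU for the paper's actual SSM parameterization is purely an illustrative assumption.
```python
import torch
import torch.nn as nn

class SequenceSpectralFilter(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, eigvals, eigvecs, x):
        # torch.linalg.eigh returns eigenvalues in ascending order, so the
        # spectrum can be read as a left-to-right sequence.
        seq = eigvals.view(1, -1, 1)          # (1, N, 1)
        z, _ = self.rnn(seq)                  # scan over the spectrum
        h = self.out(z).view(-1)              # one filter value per eigenvalue
        return eigvecs @ (h.unsqueeze(-1) * (eigvecs.T @ x))
```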
arXiv Detail & Related papers (2024-08-16T07:33:58Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
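The selection step can be illustrated with a generic farthest-point greedy heuristic over spectral embeddings; this standard k-center greedy is an assumption and does not reproduce SGGC's actual objective.
```python
import numpy as np

def greedy_coreset(emb: np.ndarray, k: int) -> list[int]:
    # emb: (n, d) spectral embeddings of candidate ego-graphs
    start = int(np.argmax(np.linalg.norm(emb - emb.mean(axis=0), axis=1)))
    chosen = [start]
    dist = np.linalg.norm(emb - emb[start], axis=1)   # distance to nearest chosen
    while len(chosen) < k:
        nxt = int(np.argmax(dist))                    # farthest uncovered candidate
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(emb - emb[nxt], axis=1))
    return chosen
```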
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
Self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly biases related to structure.
This paper presents Gradformer, a method innovatively integrating GT with the intrinsic inductive bias.
Gradformer consistently outperforms the Graph Neural Network and GT baseline models in various graph classification and regression tasks.
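The structural bias named in the title can be sketched as attention weights attenuated exponentially with shortest-path distance; dense attention, a precomputed distance matrix spd, and the decay rate gamma are assumptions of this sketch.
```python
import torch

def decayed_attention(q, k, v, spd, gamma: float = 0.5):
    # q, k, v: (N, d) projections; spd: (N, N) float shortest-path distances
    scores = (q @ k.T) / q.size(-1) ** 0.5
    attn = torch.softmax(scores, dim=-1) * gamma ** spd  # decay with distance
    attn = attn / attn.sum(dim=-1, keepdim=True)         # renormalize rows
    return attn @ v
```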
arXiv Detail & Related papers (2024-04-24T08:37:13Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
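The Sinkhorn normalization behind such differentiable, controllably soft masks is standard and sketched below: alternately normalize rows and columns of the score matrix in log space. Treating the scores as a dense N x N matrix and the temperature tau are assumptions.
```python
import torch

def sinkhorn(scores: torch.Tensor, tau: float = 0.1, iters: int = 20):
    # scores: (N, N) raw edge-attention logits; smaller tau -> sparser mask
    log_p = scores / tau
    for _ in range(iters):
        log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)  # rows
        log_p = log_p - torch.logsumexp(log_p, dim=-2, keepdim=True)  # columns
    return log_p.exp()  # near doubly-stochastic, fully differentiable
```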
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Homophily-Related: Adaptive Hybrid Graph Filter for Multi-View Graph Clustering [29.17784041837907]
We propose the Adaptive Hybrid Graph Filter for Multi-View Graph Clustering (AHGFC).
AHGFC learns the node embedding based on the graph joint aggregation matrix.
Experimental results show that our proposed model performs well on six datasets containing homophilous and heterophilous graphs.
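A filter that adapts between homophilous and heterophilous structure can be sketched as a learnable mix of low-pass and high-pass aggregation; this simple convex mix is an assumed simplification, not AHGFC's graph joint aggregation matrix.
```python
import torch
import torch.nn as nn

class HybridGraphFilter(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learnable balance (logit)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor):
        low = adj_norm @ x        # low-pass: smooth toward neighbors
        high = x - adj_norm @ x   # high-pass: emphasize differences
        a = torch.sigmoid(self.alpha)
        return self.lin(a * low + (1.0 - a) * high)
```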
arXiv Detail & Related papers (2024-01-05T07:27:29Z)
- Specformer: Spectral Graph Neural Networks Meet Transformers [51.644312964537356]
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions.
We introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain.
By stacking multiple Specformer layers, one can build a powerful spectral GNN.
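The eigenvalue self-attention idea can be sketched compactly: embed each eigenvalue as a token, attend across the spectrum, and decode a new filter response. The plain linear embedding here stands in for Specformer's richer eigenvalue encoding, and the layer sizes are assumptions.
```python
import torch
import torch.nn as nn

class EigenvalueAttention(nn.Module):
    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decode = nn.Linear(d_model, 1)

    def forward(self, eigvals, eigvecs, x):
        tokens = self.embed(eigvals.view(1, -1, 1))  # one token per eigenvalue
        z, _ = self.attn(tokens, tokens, tokens)     # attend across the spectrum
        h = self.decode(z).view(-1)                  # new filter response
        return eigvecs @ (h.unsqueeze(-1) * (eigvecs.T @ x))
```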
arXiv Detail & Related papers (2023-03-02T07:36:23Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- PatchGT: Transformer over Non-trainable Clusters for Learning Graph Representations [18.203910156450085]
We propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT).
Unlike previous transformer-based models for learning graph representations, PatchGT learns from non-trainable graph patches, not from nodes directly.
PatchGT achieves higher expressive power than 1-WL-type GNNs, and the empirical study shows that PatchGT achieves competitive performance on benchmark datasets.
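Learning from non-trainable patches can be sketched as: cluster nodes once with a fixed (non-learned) method, pool features per cluster, and feed the patch tokens to a standard Transformer. Spectral clustering, mean pooling, and the layer sizes are assumptions of this sketch.
```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import SpectralClustering

def patch_tokens(adj: np.ndarray, x: torch.Tensor, num_patches: int = 8):
    # Non-trainable clustering: patches are fixed before any learning happens.
    labels = SpectralClustering(n_clusters=num_patches,
                                affinity="precomputed").fit_predict(adj)
    tokens = [x[torch.as_tensor(labels == c)].mean(dim=0)
              for c in range(num_patches)]           # mean-pool nodes per patch
    return torch.stack(tokens)                       # (num_patches, d)

# A standard Transformer then operates on patch tokens instead of raw nodes.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True),
    num_layers=2)
```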
arXiv Detail & Related papers (2022-11-26T01:17:23Z)
- How Powerful are Spectral Graph Neural Networks [9.594432031144715]
Spectral Graph Neural Network is a kind of Graph Neural Network based on graph signal filters.
We first prove that even spectral GNNs without nonlinearity can produce arbitrary graph signals.
We also establish a connection between the expressive power of spectral GNNs and Graph Isomorphism (GI) testing.
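One step behind the linearity claim (stated under the paper's conditions that the Laplacian eigenvalues are distinct and the input misses no frequency component): a polynomial filter sum_k theta_k L^k can match any target response h on the spectrum, because the Vandermonde system below is invertible when the eigenvalues are distinct.
```latex
\begin{pmatrix}
1 & \lambda_1 & \cdots & \lambda_1^{N-1}\\
\vdots & \vdots & & \vdots\\
1 & \lambda_N & \cdots & \lambda_N^{N-1}
\end{pmatrix}
\begin{pmatrix} \theta_0\\ \vdots\\ \theta_{N-1} \end{pmatrix}
=
\begin{pmatrix} h(\lambda_1)\\ \vdots\\ h(\lambda_N) \end{pmatrix}
```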
arXiv Detail & Related papers (2022-05-23T10:22:12Z)
- Simplified Graph Convolution with Heterophily [25.7577503312319]
We show that Simple Graph Convolution (SGC) is ineffective for heterophilous (i.e., non-homophilous) graphs.
We propose Adaptive Simple Graph Convolution (ASGC), which we show can adapt to both homophilous and heterophilous graph structure.
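One plausible reading of the adaptation (an assumption, not necessarily the paper's exact procedure) is a per-channel least-squares fit of polynomial filter coefficients over propagated features, so the effective filter shape depends on the graph at hand.
```python
import numpy as np

def adaptive_sgc(S: np.ndarray, X: np.ndarray, K: int = 3) -> np.ndarray:
    # S: normalized adjacency; X: (N, d) float feature matrix
    props = [S @ X]
    for _ in range(K - 1):
        props.append(S @ props[-1])         # S^k X for k = 1..K
    H = np.stack(props, axis=-1)            # (N, d, K)
    Z = np.empty_like(X)
    for j in range(X.shape[1]):
        # Per-channel least squares: coefficients adapt to this graph's structure
        theta, *_ = np.linalg.lstsq(H[:, j, :], X[:, j], rcond=None)
        Z[:, j] = H[:, j, :] @ theta
    return Z
```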
arXiv Detail & Related papers (2022-02-08T20:52:08Z)
- Improving Spectral Graph Convolution for Learning Graph-level Representation [27.76697047602983]
We show that for learning representations of entire graphs, topological distance seems necessary, since it characterizes the basic relations between nodes.
By removing it, as well as the limitation of graph filters, the resulting new architecture significantly boosts performance on learning graph representations.
It also provides an understanding that quantitatively measures the effect of the spectrum on input signals, in comparison to the well-known spectral/low-pass filters.
arXiv Detail & Related papers (2021-12-14T04:50:46Z)
- Pointspectrum: Equivariance Meets Laplacian Filtering for Graph Representation Learning [3.7875603451557063]
Graph Representation Learning (GRL) has become essential for modern graph data mining and learning tasks.
While Graph Neural Networks (GNNs) have been used in state-of-the-art GRL architectures, they have been shown to suffer from over-smoothing.
We propose PointSpectrum, a spectral method that incorporates a set equivariant network to account for a graph's structure.
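The set-equivariant component can be sketched with a DeepSets-style layer applied to Laplacian-filtered node features; reading the summary this way is an assumption.
```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.local = nn.Linear(in_dim, out_dim)     # per-node transform
        self.pooled = nn.Linear(in_dim, out_dim)    # shared global context

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Permutation-equivariant: reordering nodes reorders the output identically.
        return self.local(x) + self.pooled(x.mean(dim=0, keepdim=True))
```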
arXiv Detail & Related papers (2021-09-06T10:59:11Z)
- Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets [81.63035727821145]
Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning.
We propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets.
arXiv Detail & Related papers (2021-08-03T17:57:53Z)
- Beyond Low-pass Filtering: Graph Convolutional Networks with Automatic Filtering [61.315598419655224]
We propose Automatic Graph Convolutional Networks (AutoGCN) to capture the full spectrum of graph signals.
While it is based on graph spectral theory, our AutoGCN is also localized in space and has a spatial form.
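One generic way to move beyond a fixed low-pass response is a band-pass with learnable center and width, sketched below; this parameterization is an assumption, not AutoGCN's published form.
```python
import torch
import torch.nn as nn

class LearnableBandPass(nn.Module):
    def __init__(self):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(1.0))  # where the pass-band sits
        self.width = nn.Parameter(torch.tensor(0.5))   # how wide it is

    def response(self, lam: torch.Tensor) -> torch.Tensor:
        # Acts low-pass, band-pass, or high-pass depending on the learned center
        return torch.exp(-((lam - self.center) ** 2) / (2 * self.width ** 2 + 1e-8))
```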
arXiv Detail & Related papers (2021-07-10T04:11:25Z)
- Rethinking Graph Transformers with Spectral Attention [13.068288784805901]
We present the $\textit{Spectral Attention Network}$ (SAN), which uses a learned positional encoding (LPE) to learn the position of each node in a given graph.
By leveraging the full spectrum of the Laplacian, our model is theoretically powerful in distinguishing graphs, and can better detect similar sub-structures from their resonance.
Our model performs on par or better than state-of-the-art GNNs, and outperforms any attention-based model by a wide margin.
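The LPE can be sketched as a small network over per-node (eigenvalue, eigenvector-entry) pairs; the number of eigenpairs m, the layer sizes, and the mean pooling are illustrative assumptions.
```python
import torch
import torch.nn as nn

class LaplacianLPE(nn.Module):
    def __init__(self, d_model: int = 16, n_heads: int = 2, m: int = 8):
        super().__init__()
        self.m = m                              # number of eigenpairs used
        self.embed = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, eigvals: torch.Tensor, eigvecs: torch.Tensor):
        lam = eigvals[: self.m].expand(eigvecs.size(0), -1)  # (N, m)
        phi = eigvecs[:, : self.m]                           # (N, m)
        pairs = torch.stack([lam, phi], dim=-1)              # (N, m, 2)
        z = self.encoder(self.embed(pairs))                  # attend over eigenpairs
        return z.mean(dim=1)                                 # (N, d_model) per-node code
```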
arXiv Detail & Related papers (2021-06-07T18:11:11Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
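One reading of a per-channel adaptive frequency response (an assumed simplification of AdaGNN's layer) gives each feature channel its own learnable amount of Laplacian smoothing.
```python
import torch
import torch.nn as nn

class AdaptiveResponseLayer(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # One learnable filter strength per feature channel
        self.phi = nn.Parameter(torch.full((num_channels,), 0.5))

    def forward(self, lap: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # x - (L x) * phi: each channel gets its own frequency response
        return x - (lap @ x) * self.phi
```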
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Stacked Graph Filter [19.343260981528186]
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint.
We find that by stacking graph filters with learnable solution parameters, we can build a highly adaptive and robust graph classification model.
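A hedged sketch of the stacking idea: each layer mixes the identity with a fixed propagation under its own learnable balance parameter. This specific mix is an assumption; the paper's filter form is not reproduced here.
```python
import torch
import torch.nn as nn

class StackedFilter(nn.Module):
    def __init__(self, dim: int, depth: int = 4):
        super().__init__()
        self.alphas = nn.Parameter(torch.zeros(depth))  # one mix parameter per layer
        self.lins = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        for alpha, lin in zip(self.alphas, self.lins):
            a = torch.sigmoid(alpha)
            x = torch.relu(lin(a * x + (1.0 - a) * adj_norm @ x))
        return x
```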
arXiv Detail & Related papers (2020-11-22T11:20:14Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
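The low-pass operator alluded to can be sketched with a truncated Taylor expansion of the heat kernel exp(-tL); the truncation order and diffusion time t are assumptions of this sketch.
```python
import torch

def heat_kernel_propagate(lap: torch.Tensor, x: torch.Tensor,
                          t: float = 1.0, order: int = 5) -> torch.Tensor:
    # Truncated Taylor series: exp(-tL) x = sum_k (-t)^k L^k x / k!
    out = x.clone()
    term = x.clone()
    for k in range(1, order + 1):
        term = (-t / k) * (lap @ term)  # next Taylor term, built incrementally
        out = out + term
    return out
```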
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Heterogeneous Graph Transformer [49.675064816860505]
We present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous graphs.
To handle dynamic heterogeneous graphs, we introduce the relative temporal encoding technique into HGT.
To handle Web-scale graph data, we design the heterogeneous mini-batch graph sampling algorithm HGSampling for efficient and scalable training.
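The relative temporal encoding can be sketched as sinusoidal features of the time gap plus a learnable projection; this is an assumed simplification of HGT's RTE, with d_model and the sinusoidal bases as illustrative choices.
```python
import torch
import torch.nn as nn

class RelativeTemporalEncoding(nn.Module):
    def __init__(self, d_model: int = 16, max_period: float = 10000.0):
        super().__init__()
        freq = torch.exp(torch.arange(0, d_model, 2).float()
                         * (-torch.log(torch.tensor(max_period)) / d_model))
        self.register_buffer("freq", freq)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, delta_t: torch.Tensor) -> torch.Tensor:
        # delta_t: (E,) float time gaps between connected events/nodes
        ang = delta_t.unsqueeze(-1) * self.freq
        return self.proj(torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1))
```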
arXiv Detail & Related papers (2020-03-03T04:49:21Z)