Partition-wise Graph Filtering: A Unified Perspective Through the Lens of Graph Coarsening
- URL: http://arxiv.org/abs/2505.14033v2
- Date: Thu, 22 May 2025 05:47:24 GMT
- Title: Partition-wise Graph Filtering: A Unified Perspective Through the Lens of Graph Coarsening
- Authors: Guoming Li, Jian Yang, Yifan Chen
- Abstract summary: This paper introduces Coarsening-guided Partition-wise Filtering (CPF). CPF innovates by performing filtering on node partitions. In-depth analysis is conducted for each phase of CPF, showing its superiority over other paradigms.
- Score: 15.198274855557132
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Filtering-based graph neural networks (GNNs) constitute a distinct class of GNNs that employ graph filters to handle graph-structured data, achieving notable success in various graph-related tasks. Conventional methods adopt a graph-wise filtering paradigm, imposing a uniform filter across all nodes, yet recent findings suggest that this rigid paradigm struggles with heterophilic graphs. To overcome this, recent works have introduced node-wise filtering, which assigns distinct filters to individual nodes, offering enhanced adaptability. However, a fundamental gap remains: a comprehensive framework unifying these two strategies is still absent, limiting theoretical insights into the filtering paradigms. Moreover, through the lens of Contextual Stochastic Block Model, we reveal that a synthesis of graph-wise and node-wise filtering provides a sufficient solution for classification on graphs exhibiting both homophily and heterophily, suggesting the risk of excessive parameterization and potential overfitting with node-wise filtering. To address the limitations, this paper introduces Coarsening-guided Partition-wise Filtering (CPF). CPF innovates by performing filtering on node partitions. The method begins with structure-aware partition-wise filtering, which filters node partitions obtained via graph coarsening algorithms, and then performs feature-aware partition-wise filtering, refining node embeddings via filtering on clusters produced by $k$-means clustering over features. In-depth analysis is conducted for each phase of CPF, showing its superiority over other paradigms. Finally, benchmark node classification experiments, along with a real-world graph anomaly detection application, validate CPF's efficacy and practical utility.
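The abstract outlines the two-phase pipeline but gives no implementation details, so the following is only a minimal sketch of the general partition-wise filtering idea it describes: nodes are grouped into partitions, and each partition receives its own low-order polynomial graph filter. The function names (`normalized_adjacency`, `partition_wise_filter`), the use of $k$-means as a stand-in for both partitioning phases, and the random coefficients are assumptions for illustration, not the authors' CPF implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])                 # add self-loops so every degree is > 0
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def partition_wise_filter(A, X, parts, coeffs):
    """Apply a distinct low-order polynomial graph filter to each node partition.

    parts  : (n,) integer partition id for every node
    coeffs : dict mapping partition id -> array of polynomial coefficients
    """
    A_hat = normalized_adjacency(A)
    K = max(len(c) for c in coeffs.values()) - 1
    powers = [X.astype(float)]
    for _ in range(K):                          # precompute A_hat^k X for k = 0..K
        powers.append(A_hat @ powers[-1])
    Z = np.zeros_like(X, dtype=float)
    for p, c in coeffs.items():
        mask = parts == p
        # Each partition mixes the propagated features with its own coefficients.
        Z[mask] = sum(ck * powers[k][mask] for k, ck in enumerate(c))
    return Z

# Toy usage: k-means over raw features stands in here for both partitioning phases.
rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T)                          # make the toy graph undirected
X = rng.normal(size=(20, 8))
parts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
coeffs = {p: rng.normal(size=3) for p in np.unique(parts)}   # order-2 filters
Z = partition_wise_filter(A, X, parts, coeffs)
```

In CPF proper, per the abstract, the structure-aware phase would obtain the partitions from a graph coarsening algorithm and the feature-aware phase from $k$-means clustering over features, with the filter parameters learned rather than sampled.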
Related papers
- Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection [59.042018542376596]
Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet remains challenging due to two key factors. Anomaly-Aware Pre-Training and Fine-Tuning (APF) is a framework to mitigate the challenges in GAD. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF in comparison to state-of-the-art baselines.
arXiv Detail & Related papers (2025-04-19T09:57:35Z)
- Dual-Frequency Filtering Self-aware Graph Neural Networks for Homophilic and Heterophilic Graphs [60.82508765185161]
We propose Dual-Frequency Filtering Self-aware Graph Neural Networks (DFGNN).
DFGNN integrates low-pass and high-pass filters to extract smooth and detailed topological features.
It dynamically adjusts filtering ratios to accommodate both homophilic and heterophilic graphs.
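The summary above describes DFGNN only at the level of combining low-pass and high-pass filters with an adjustable ratio, so the snippet below is a rough sketch of that dual-frequency idea under hypothetical names (`dual_frequency_filter`, a single scalar `alpha`); the paper's self-aware, dynamic adjustment mechanism is not reproduced here.

```python
import numpy as np

def dual_frequency_filter(A, X, alpha=0.5):
    """Blend a low-pass and a high-pass filtered view of X; alpha in [0, 1]."""
    n = A.shape[0]
    A = A + np.eye(n)                                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]    # low-pass operator
    L_hat = np.eye(n) - A_hat                                # high-pass operator
    return alpha * (A_hat @ X) + (1.0 - alpha) * (L_hat @ X)
```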
arXiv Detail & Related papers (2024-11-18T04:57:05Z)
- Generalized Learning of Coefficients in Spectral Graph Convolutional Networks [5.5711773076846365]
Spectral Graph Convolutional Networks (GCNs) have gained popularity in graph machine learning applications.
G-Arnoldi-GCN consistently outperforms state-of-the-art methods when suitable functions are employed.
arXiv Detail & Related papers (2024-09-07T12:53:44Z)
- Node-wise Filtering in Graph Neural Networks: A Mixture of Experts Approach [58.8524608686851]
Graph Neural Networks (GNNs) have proven to be highly effective for node classification tasks across diverse graph structural patterns.
Traditionally, GNNs employ a uniform global filter, typically a low-pass filter for homophilic graphs and a high-pass filter for heterophilic graphs.
We introduce a novel GNN framework Node-MoE that utilizes a mixture of experts to adaptively select the appropriate filters for different nodes.
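As a loose illustration of the mixture-of-experts idea in the summary above, the sketch below gates a low-pass expert and a high-pass expert per node with an untrained linear map over the node features; the gate design, names, and shapes are assumptions, not the Node-MoE architecture.

```python
import numpy as np

def node_moe_filter(A, X, W_gate):
    """Per-node gated blend of a low-pass expert and a high-pass expert.

    W_gate : (d, 2) hypothetical gating weights over the two experts
    """
    n = A.shape[0]
    A = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    experts = np.stack([A_hat @ X, (np.eye(n) - A_hat) @ X])   # (2, n, d)
    logits = X @ W_gate                                        # (n, 2)
    gate = np.exp(logits - logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)                    # per-node softmax
    # Convex combination of the two experts' outputs, one weight pair per node.
    return gate[:, 0, None] * experts[0] + gate[:, 1, None] * experts[1]
```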
arXiv Detail & Related papers (2024-06-05T17:12:38Z)
- Graph Contrastive Learning under Heterophily via Graph Filters [51.46061703680498]
Graph contrastive learning (CL) methods learn node representations in a self-supervised manner by maximizing the similarity between the augmented node representations obtained via a GNN-based encoder.
In this work, we propose an effective graph CL method, namely HLCL, for learning graph representations under heterophily.
Our extensive experiments show that HLCL outperforms state-of-the-art graph CL methods on benchmark datasets with heterophily, as well as large-scale real-world graphs, by up to 7%, and outperforms graph supervised learning methods on datasets with heterophily by up to 10%.
arXiv Detail & Related papers (2023-03-11T08:32:39Z)
- Node-oriented Spectral Filtering for Graph Neural Networks [38.0315325181726]
Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data.
In general, learning a universal spectral filter on the graph from a global perspective may still struggle to adapt to the variation of local patterns.
We propose node-oriented spectral filtering for graph neural networks (NFGNN).
arXiv Detail & Related papers (2022-12-07T14:15:28Z)
- Learning Optimal Graph Filters for Clustering of Attributed Graphs [20.810096547938166]
Many real-world systems can be represented as graphs where the different entities in the system are represented by nodes and their interactions by edges.
An important task in studying large datasets with graphical structure is graph clustering.
We introduce a graph signal processing based approach, where we learn the parameters of Finite Impulse Response (FIR) and Autoregressive Moving Average (ARMA) graph filters optimized for clustering.
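For context on the first of the two filter families named above, here is a minimal sketch of a Finite Impulse Response (FIR) graph filter: the output is a fixed-order polynomial in a graph shift operator S applied to a graph signal. The coefficient vector h is a placeholder; in the paper the coefficients are optimized for clustering, and the ARMA filter adds a rational (feedback) part that is omitted here.

```python
import numpy as np

def fir_graph_filter(S, x, h):
    """Compute sum_k h[k] * S^k x, an FIR graph filter of order len(h) - 1.

    S : graph shift operator (e.g., normalized adjacency or Laplacian)
    x : graph signal, shape (n,) or (n, d)
    h : filter coefficients
    """
    out = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)                # S^0 x
    for hk in h:
        out += hk * Skx
        Skx = S @ Skx                    # advance to the next power of S
    return out
```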
arXiv Detail & Related papers (2022-11-09T01:49:23Z)
- Message Passing in Graph Convolution Networks via Adaptive Filter Banks [81.12823274576274]
We present a novel graph convolution operator, termed BankGCN.
It decomposes multi-channel signals on graphs into subspaces and handles particular information in each subspace with an adapted filter.
It achieves excellent performance in graph classification on a collection of benchmark graph datasets.
arXiv Detail & Related papers (2021-06-18T04:23:34Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks [46.14626839260314]
We show that the aggregation operation at the core of current Graph Neural Networks (GNNs) is potentially a problematic factor underlying all GNN methods for learning on certain datasets.
We augment the aggregation operations with their dual, i.e., diversification operators that make nodes more distinct and preserve their identity.
Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations (a minimal sketch follows this entry).
In the experiments, we observe desired characteristics of the models and significant performance boost upon the baselines on 9 node classification tasks.
arXiv Detail & Related papers (2020-08-20T08:45:16Z)
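The "Complete the Missing Half" entry above describes pairing aggregation with a diversification dual in a two-channel filtering process. The sketch below illustrates that stated idea with a row-normalized aggregation operator P and its complement I - P, under hypothetical names and without the paper's full architecture.

```python
import numpy as np

def two_channel_filter(A, X):
    """Concatenate an aggregation (smoothing) channel with its diversification dual."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # row-normalized adjacency
    aggregated = P @ X                   # aggregation channel: neighborhood average
    diversified = X - P @ X              # diversification channel: (I - P) X
    return np.concatenate([aggregated, diversified], axis=1)
```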
This list is automatically generated from the titles and abstracts of the papers in this site.