Degree-Conscious Spiking Graph for Cross-Domain Adaptation
- URL: http://arxiv.org/abs/2410.06883v4
- Date: Fri, 16 May 2025 12:24:56 GMT
- Title: Degree-Conscious Spiking Graph for Cross-Domain Adaptation
- Authors: Yingxu Wang, Mengzhu Wang, Siwei Liu, Houcheng Su, Nan Yin, James Kwok,
- Abstract summary: Spiking Graph Networks (SGNs) have demonstrated significant potential in graph classification by emulating brain-inspired neural dynamics. In this paper, we first propose the domain adaptation problem in SGNs, and introduce a novel framework named Degree-Conscious Spiking Graph for Cross-Domain Adaptation.
- Score: 9.785526401884155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Graph Networks (SGNs) have demonstrated significant potential in graph classification by emulating brain-inspired neural dynamics to achieve energy-efficient computation. However, existing SGNs are generally constrained to in-distribution scenarios and struggle with distribution shifts. In this paper, we first propose the domain adaptation problem in SGNs, and introduce a novel framework named Degree-Conscious Spiking Graph for Cross-Domain Adaptation. DeSGraDA enhances generalization across domains with three key components. First, we introduce the degree-conscious spiking representation module by adapting spike thresholds based on node degrees, enabling more expressive and structure-aware signal encoding. Then, we perform temporal distribution alignment by adversarially matching membrane potentials between domains, ensuring effective performance under domain shift while preserving energy efficiency. Additionally, we extract consistent predictions across two spaces to create reliable pseudo-labels, effectively leveraging unlabeled data to enhance graph classification performance. Furthermore, we establish the first generalization bound for SGDA, providing theoretical insights into its adaptation performance. Extensive experiments on benchmark datasets validate that DeSGraDA consistently outperforms state-of-the-art methods in both classification accuracy and energy efficiency.
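The abstract's first component, degree-conscious spiking, can be illustrated with a minimal sketch: each node gets a spike threshold scaled by its degree, and a leaky integrate-and-fire (LIF) step emits a spike wherever the membrane potential crosses that node's threshold. The log-degree scaling rule, the leak factor, and the hard reset are illustrative assumptions; the abstract states only that thresholds adapt to node degrees.

```python
import numpy as np

def degree_adaptive_thresholds(adj, base=1.0, alpha=0.5):
    """Per-node spike thresholds that grow with (log-)degree.

    The exact scaling is an assumption for illustration; DeSGraDA's
    actual adaptation rule is defined in the paper, not here.
    """
    deg = adj.sum(axis=1)                       # node degrees
    scaled = np.log1p(deg) / np.log1p(deg).max()
    return base * (1.0 + alpha * scaled)

def lif_step(potential, inputs, thresholds, leak=0.9):
    """One leaky integrate-and-fire step with per-node thresholds."""
    potential = leak * potential + inputs       # integrate with leak
    spikes = (potential >= thresholds).astype(float)
    potential = np.where(spikes > 0, 0.0, potential)  # hard reset on spike
    return potential, spikes

# A 3-node path graph: the hub (degree 2) gets a higher threshold,
# so identical input drives the low-degree node to spike first.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
th = degree_adaptive_thresholds(adj)
pot, spikes = lif_step(np.zeros(3), np.array([1.4, 1.4, 1.0]), th)
```

With these parameters, the degree-1 endpoint spikes on input 1.4 while the degree-2 hub does not, which is the intuition behind structure-aware encoding: high-degree nodes receive more input and need a higher bar to fire.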
Related papers
- ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a structured diffusion model for structured node classification. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z) - Graph Neural Networks Powered by Encoder Embedding for Improved Node Learning [17.31465642587528]
Graph neural networks (GNNs) have emerged as a powerful framework for a wide range of node-level graph learning tasks. In this paper, we leverage a statistically grounded method, one-hot graph encoder embedding (GEE), to generate high-quality initial node features. We demonstrate its effectiveness through extensive simulations and real-world experiments across both unsupervised and supervised settings.
arXiv Detail & Related papers (2025-07-15T21:01:54Z) - Graph Attention for Heterogeneous Graphs with Positional Encoding [0.0]
Graph Neural Networks (GNNs) have emerged as the de facto standard for modeling graph data. This work benchmarks various GNN architectures to identify the most effective methods for heterogeneous graphs. Our findings reveal that graph attention networks excel in these tasks.
arXiv Detail & Related papers (2025-04-03T18:00:02Z) - Decoupled Graph Energy-based Model for Node Out-of-Distribution Detection on Heterophilic Graphs [61.226857589092]
OOD detection on nodes in graph learning remains underexplored.
GNNSafe adapted energy-based detection to the graph domain with state-of-the-art performance.
We introduce DeGEM, which decomposes the learning process into two parts: a graph encoder that leverages topology information for node representations and an energy head that operates in latent space.
arXiv Detail & Related papers (2025-02-25T07:20:00Z) - Bridging Domain Adaptation and Graph Neural Networks: A Tensor-Based Framework for Effective Label Propagation [23.79865440689265]
Graph Neural Networks (GNNs) have recently become the predominant tools for studying graph data.
Despite state-of-the-art performance on graph classification tasks, GNNs are overwhelmingly trained in a single domain under supervision.
We propose the Label-Propagation Graph Neural Network (LP-TGNN) framework to bridge the gap between graph data and traditional domain adaptation methods.
arXiv Detail & Related papers (2025-02-12T15:36:38Z) - Rank and Align: Towards Effective Source-free Graph Domain Adaptation [16.941755478093153]
Graph neural networks (GNNs) have achieved impressive performance in graph domain adaptation.
However, extensive source graphs could be unavailable in real-world scenarios due to privacy and storage concerns.
We introduce a novel GNN-based approach called Rank and Align (RNA), which ranks graph similarities with spectral seriation for robust semantics learning.
arXiv Detail & Related papers (2024-08-22T08:00:50Z) - A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - Rethinking Propagation for Unsupervised Graph Domain Adaptation [17.443218657417454]
Unsupervised Graph Domain Adaptation (UGDA) aims to transfer knowledge from a labelled source graph to an unlabelled target graph.
We propose a simple yet effective approach called A2GNN for graph domain adaptation.
arXiv Detail & Related papers (2024-02-08T13:24:57Z) - Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation [108.40945109477886]
We propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment.
Our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
arXiv Detail & Related papers (2024-01-21T09:57:56Z) - Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels [33.89511660654271]
Few-shot node classification poses a significant challenge for Graph Neural Networks (GNNs).
Self-training has emerged as a widely popular framework to leverage the abundance of unlabeled data.
We propose a novel Distribution-Consistent Graph Self-Training framework to identify pseudo-labeled nodes that are both informative and capable of redeeming the distribution discrepancy.
arXiv Detail & Related papers (2024-01-18T22:07:48Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs). In FMP, aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
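The bias-mitigation step described above can be sketched in closed form: shift each demographic group's representations so their centers coincide with the global center. FMP itself derives this step inside a unified optimization framework, so the explicit one-shot shift below (and the `strength` knob) is an assumption for illustration only.

```python
import numpy as np

def push_group_centers(h, groups, strength=1.0):
    """Move each group's mean representation toward the global mean.

    `h` is an (n, d) array of node representations, `groups` an (n,)
    array of demographic group ids. With strength=1.0 all group
    centers coincide exactly; within-group structure is preserved.
    """
    h = h.astype(float).copy()
    global_center = h.mean(axis=0)
    for g in np.unique(groups):
        mask = groups == g
        shift = global_center - h[mask].mean(axis=0)
        h[mask] += strength * shift   # translate the whole group
    return h

h = np.array([[0., 0.], [2., 0.], [10., 0.], [12., 0.]])
groups = np.array([0, 0, 1, 1])
debiased = push_group_centers(h, groups)
```

Because each group is only translated, relative positions within a group, and hence within-group discriminability, are untouched; only the between-group offset that a sensitive-attribute classifier could exploit is removed.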
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification [25.831508778029097]
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
arXiv Detail & Related papers (2023-12-07T07:54:11Z) - T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Semi-supervised Domain Adaptation in Graph Transfer Learning [24.32465362708831]
Unsupervised domain adaptation on graphs aims for knowledge transfer from label-rich source graphs to unlabeled target graphs.
This imposes critical challenges on graph transfer learning due to serious domain shifts and label scarcity.
We propose a method named Semi-supervised Graph Domain Adaptation (SGDA) to address these challenges.
arXiv Detail & Related papers (2023-09-19T17:20:58Z) - Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Confidence May Cheat: Self-Training on Graph Neural Networks under Distribution Shift [39.73304203101909]
Self-training methods have been widely adopted on graphs by labeling high-confidence unlabeled nodes and then adding them to the training step.
We propose a novel Distribution Recovered Graph Self-Training framework (DR-GST), which could recover the distribution of the original labeled dataset.
Both our theoretical analysis and extensive experiments on five benchmark datasets demonstrate the effectiveness of the proposed DR-GST.
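The vanilla self-training step this entry critiques, selecting high-confidence unlabeled nodes as pseudo-labels, is easy to sketch; DR-GST's contribution is the distribution-recovery reweighting applied on top of it, which is omitted here. The threshold value is an illustrative assumption.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Vanilla confidence-based pseudo-label selection.

    `probs` is an (n, c) array of predicted class probabilities for
    unlabeled nodes. Returns the indices of nodes whose top probability
    meets `threshold`, and their predicted labels. Selecting only
    high-confidence nodes is exactly what skews the label distribution
    that DR-GST then recovers (its reweighting is not shown here).
    """
    conf = probs.max(axis=1)          # top-1 confidence per node
    labels = probs.argmax(axis=1)     # predicted class per node
    keep = conf >= threshold
    return np.flatnonzero(keep), labels[keep]

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
idx, labels = select_pseudo_labels(probs)
```

In a full loop, the selected nodes would be added to the training set and the model retrained; the entry's point is that repeating this naively drifts the training distribution away from the original labeled data.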
arXiv Detail & Related papers (2022-01-27T07:12:27Z) - Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows its practical value of reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
arXiv Detail & Related papers (2021-12-02T03:18:18Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations existing in the training set for predictions, even though it is a spurious correlation.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.