AdvSynGNN: Structure-Adaptive Graph Neural Nets via Adversarial Synthesis and Self-Corrective Propagation
- URL: http://arxiv.org/abs/2602.17071v1
- Date: Thu, 19 Feb 2026 04:26:57 GMT
- Title: AdvSynGNN: Structure-Adaptive Graph Neural Nets via Adversarial Synthesis and Self-Corrective Propagation
- Authors: Rong Fu, Muge Qi, Chunlei Meng, Shuo Yin, Kun Liu, Zhaolu Kang, Simon Fong
- Abstract summary: Graph neural networks frequently encounter significant performance degradation when confronted with structural noise or non-homophilous topologies. We present AdvSynGNN, a comprehensive architecture designed for resilient node-level representation learning.
- Score: 8.765438402697892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks frequently encounter significant performance degradation when confronted with structural noise or non-homophilous topologies. To address these systemic vulnerabilities, we present AdvSynGNN, a comprehensive architecture designed for resilient node-level representation learning. The proposed framework orchestrates multi-resolution structural synthesis alongside contrastive objectives to establish geometry-sensitive initializations. We develop a transformer backbone that adaptively accommodates heterophily by modulating attention mechanisms through learned topological signals. Central to our contribution is an integrated adversarial propagation engine, where a generative component identifies potential connectivity alterations while a discriminator enforces global coherence. Furthermore, label refinement is achieved through a residual correction scheme guided by per-node confidence metrics, which facilitates precise control over iterative stability. Empirical evaluations demonstrate that this synergistic approach effectively optimizes predictive accuracy across diverse graph distributions while maintaining computational efficiency. The study concludes with practical implementation protocols to ensure the robust deployment of the AdvSynGNN system in large-scale environments.
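The abstract describes, but does not implement, a residual label-correction scheme gated by per-node confidence. The sketch below shows one way such a step could look in PyTorch; the function name refine_labels, the entropy-based confidence metric, the alpha gate, and the dense row-normalized adjacency are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of confidence-gated residual label refinement, as one
# plausible reading of the abstract. All names and design choices here
# are illustrative assumptions; the paper's actual method may differ.
import torch
import torch.nn.functional as F

def refine_labels(logits: torch.Tensor, adj: torch.Tensor,
                  num_steps: int = 5, alpha: float = 0.5) -> torch.Tensor:
    """Iteratively propagate soft labels over the graph, applying a
    residual correction scaled by a per-node confidence metric.

    logits: [N, C] raw class scores from the backbone.
    adj:    [N, N] row-normalized adjacency (dense, for brevity).
    """
    probs = F.softmax(logits, dim=-1)          # initial soft labels
    num_classes = probs.size(-1)
    log_c = torch.log(torch.tensor(float(num_classes)))
    for _ in range(num_steps):
        propagated = adj @ probs               # one propagation step
        # Per-node confidence: 1 - normalized entropy of the prediction.
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        confidence = 1.0 - entropy / log_c
        # Residual correction: confident nodes keep their own labels,
        # uncertain nodes lean on propagated neighborhood evidence.
        gate = (alpha * confidence).unsqueeze(-1)
        probs = gate * probs + (1.0 - gate) * propagated
    return probs

# Toy usage with random data.
num_nodes, num_classes = 6, 3
adj = torch.rand(num_nodes, num_nodes)
adj = adj / adj.sum(dim=-1, keepdim=True)      # row-normalize
refined = refine_labels(torch.randn(num_nodes, num_classes), adj)
```

In this reading, the confidence gate is what gives "precise control over iterative stability": as predictions sharpen, the gate saturates and the iteration settles, while low-confidence nodes keep absorbing neighborhood evidence.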
Related papers
- LinkD: AutoRegressive Diffusion Model for Mechanical Linkage Synthesis [11.69314618713792]
We introduce an autoregressive diffusion framework that exploits the dyadic nature of linkage assembly. We demonstrate successful synthesis of linkage systems containing up to 20 nodes, with scalability to N-node architectures.
arXiv Detail & Related papers (2026-01-07T16:19:11Z)
- Dynamical Learning in Deep Asymmetric Recurrent Neural Networks [1.3421746809394772]
We show that asymmetric deep recurrent neural networks give rise to an exponentially large, dense accessible manifold of internal representations. We propose a distributed learning scheme in which input-output associations emerge naturally from the recurrent dynamics.
arXiv Detail & Related papers (2025-09-05T12:05:09Z)
- Power Grid Control with Graph-Based Distributed Reinforcement Learning [60.49805771047161]
This work advances a graph-based distributed reinforcement learning framework for real-time, scalable grid management. A Graph Neural Network (GNN) is employed to encode the network's topological information within the single low-level agent's observation. Experiments on the Grid2Op simulation environment show the effectiveness of the approach.
arXiv Detail & Related papers (2025-09-02T22:17:25Z)
- Parameter-Free Structural-Diversity Message Passing for Graph Neural Networks [8.462209415744098]
Graph Neural Networks (GNNs) have shown remarkable performance in structured data modeling tasks such as node classification. This paper proposes a parameter-free graph neural network framework based on structural diversity. The framework is inspired by structural diversity theory and designs a unified structural-diversity message passing mechanism; a minimal sketch of such a mechanism appears after this list.
arXiv Detail & Related papers (2025-08-27T13:42:45Z)
- Recurrent Stochastic Configuration Networks with Incremental Blocks [0.0]
Recurrent stochastic configuration networks (RSCNs) have shown promise in modelling nonlinear dynamic systems with order uncertainty.
This paper develops the original RSCNs with block increments, termed block RSCNs (BRSCNs).
BRSCNs can simultaneously add multiple reservoir nodes (subreservoirs) during construction.
arXiv Detail & Related papers (2024-11-18T05:58:47Z)
- Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain. We propose Matcha, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs. We validate the effectiveness of Matcha on both synthetic and real-world datasets, demonstrating its robustness across various combinations of structure and attribute shifts.
arXiv Detail & Related papers (2024-10-09T15:15:40Z)
- Efficient Graph Optimization via Distance-Aware Graph Representation Learning [5.216774377033164]
We propose DRTR, a graph optimization framework that integrates distance-aware multi-hop message passing with dynamic topology refinement. DRTR leverages both static preprocessing and dynamic resampling to capture deeper structural dependencies.
arXiv Detail & Related papers (2024-06-25T05:12:51Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that, from partway into training, optimizing only the scalar batchnorm parameters matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- DR-Label: Improving GNN Models for Catalysis Systems by Label Deconstruction and Reconstruction [72.20024514713633]
We present DR-Label, a novel graph neural network (GNN) supervision and prediction strategy.
The strategy enhances the supervision signal, reduces the multiplicity of solutions in edge representation, and encourages the model to provide robust node-level predictions.
DR-Label was applied to three radically distinct models, each of which displayed consistent performance enhancements.
arXiv Detail & Related papers (2023-03-06T04:01:28Z)
- Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics [6.940097162264939]
The randomness in stochastic configuration networks (SCNs) makes them more likely to generate approximately linearly correlated hidden nodes that are redundant and of low quality.
In light of a fundamental principle in machine learning, namely that a model with fewer parameters generalizes better, this paper proposes an orthogonal SCN, termed OSCN, to filter out low-quality hidden nodes for network structure reduction.
arXiv Detail & Related papers (2022-05-26T07:07:26Z)
- Adversarial Graph Disentanglement [47.27978741175575]
A real-world graph has a complex topological structure, which is often formed by the interaction of different latent factors.
We propose an Adversarial Disentangled Graph Convolutional Network (ADGCN) for disentangled graph representation learning.
arXiv Detail & Related papers (2021-03-12T14:11:36Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
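Returning to the parameter-free structural-diversity entry above: the summary names a structural-diversity message passing mechanism without giving its form. Below is a minimal sketch of one plausible reading, using the entropy of neighbor degrees as an assumed diversity proxy; structural_diversity, diversity_message_pass, and that proxy are illustrative choices, not the paper's definitions.

```python
# Sketch of parameter-free structural-diversity message passing.
# The diversity measure (entropy of neighbor degrees) is one plausible
# proxy; the paper's precise definition may differ. Nodes are assumed
# to be integers 0..N-1.
import networkx as nx
import numpy as np

def structural_diversity(g: nx.Graph) -> np.ndarray:
    """Per-node score: entropy of the neighbor-degree distribution."""
    scores = np.zeros(g.number_of_nodes())
    for v in g.nodes:
        degs = np.array([g.degree(u) for u in g.neighbors(v)], dtype=float)
        if degs.size == 0:
            continue
        p = degs / degs.sum()
        scores[v] = -(p * np.log(p + 1e-12)).sum()
    return scores

def diversity_message_pass(g: nx.Graph, x: np.ndarray) -> np.ndarray:
    """One aggregation step: neighbor features weighted by the neighbors'
    diversity scores, with no learnable parameters anywhere."""
    s = structural_diversity(g)
    out = np.zeros_like(x)
    for v in g.nodes:
        nbrs = list(g.neighbors(v))
        if not nbrs:
            out[v] = x[v]              # isolated node keeps its features
            continue
        w = s[nbrs] + 1e-6             # avoid an all-zero weight vector
        w = w / w.sum()
        out[v] = w @ x[nbrs]           # weighted neighborhood average
    return out

# Toy usage on a built-in graph.
g = nx.karate_club_graph()             # nodes are integers 0..33
feats = np.random.rand(g.number_of_nodes(), 8)
agg = diversity_message_pass(g, feats)
```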