Toward Robust Signed Graph Learning through Joint Input-Target Denoising
- URL: http://arxiv.org/abs/2510.22513v1
- Date: Sun, 26 Oct 2025 03:34:40 GMT
- Title: Toward Robust Signed Graph Learning through Joint Input-Target Denoising
- Authors: Junran Wu, Beng Chin Ooi, Ke Xu
- Abstract summary: Signed Graph Neural Networks (SGNNs) are widely adopted to analyze complex patterns in signed graphs with both positive and negative links.
We propose RIDGE, a novel framework for Robust sIgned graph learning through joint Denoising of Graph inputs and supervision targEts.
We extensively validate our method on four prevalent signed graph datasets, and the results show that RIDGE clearly improves the robustness of popular SGNN models under various levels of noise.
- Score: 20.15917072156998
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Signed Graph Neural Networks (SGNNs) are widely adopted to analyze complex patterns in signed graphs with both positive and negative links. Given the noisy nature of real-world connections, the robustness of SGNNs has emerged as a pivotal research area. Guided by empirical properties, graph structure learning has shown its robustness in signed graph representation learning; however, there remains a paucity of research investigating robust SGNNs with theoretical guidance. Inspired by the success of the graph information bottleneck (GIB) in information extraction, we propose RIDGE, a novel framework for Robust sIgned graph learning through joint Denoising of Graph inputs and supervision targEts. Unlike the basic GIB, we extend the GIB theory with the capability of target-space denoising, since noise co-exists in both the input and target spaces. In instantiation, RIDGE effectively cleanses input data and supervision targets via a tractable objective function derived through a reparameterization mechanism and variational approximation. We extensively validate our method on four prevalent signed graph datasets, and the results show that RIDGE clearly improves the robustness of popular SGNN models under various levels of noise.
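The abstract names two standard ingredients behind the tractable objective: a reparameterization mechanism and a variational approximation. For context only, here is a minimal PyTorch-style sketch of the textbook variational information bottleneck these ingredients usually produce; the function names and the simple cross-entropy/KL form are illustrative assumptions, not RIDGE's actual objective, which additionally denoises the supervision targets.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Gaussian reparameterization trick: z = mu + sigma * eps keeps the
    # sampling step differentiable so the encoder trains end-to-end.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vib_loss(logits, targets, mu, logvar, beta=1e-3):
    # Textbook variational IB objective: a prediction term (lower bound
    # on I(Z; Y)) plus a beta-weighted KL term that upper-bounds I(Z; X)
    # against a standard-normal prior.
    pred = F.cross_entropy(logits, targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return pred + beta * kl
```

In a GIB setting, `mu` and `logvar` would come from a GNN encoder over the (possibly noisy) signed graph; the target-denoising extension described in the abstract is not reproduced here.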
Related papers
- Cross-View Topology-Aware Graph Representation Learning [0.0]
We propose GraphTCL, a dual-view contrastive learning framework that integrates structural embeddings from GNNs with topological embeddings derived from persistent homology.
Experiments on benchmark datasets, including TU and OGB molecular graphs, demonstrate that GraphTCL consistently outperforms state-of-the-art baselines.
A generic sketch of such a dual-view contrastive loss follows this entry.
arXiv Detail & Related papers (2025-12-01T19:00:58Z)
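The GraphTCL entry above hinges on a contrastive loss between two views of each sample: structural embeddings from a GNN and topological embeddings from persistent homology. As a hedged illustration, below is the standard NT-Xent loss commonly used for dual-view setups; pairing GNN and persistent-homology views this way is an assumption about the paper's setup, and the loss form is the generic one, not necessarily GraphTCL's.

```python
import torch
import torch.nn.functional as F

def ntxent(z_struct, z_topo, tau=0.5):
    # NT-Xent contrastive loss between two embedding views; row i of each
    # matrix is treated as a positive pair, all other rows as negatives.
    z1 = F.normalize(z_struct, dim=1)
    z2 = F.normalize(z_topo, dim=1)
    logits = z1 @ z2.t() / tau             # cosine similarities / temperature
    labels = torch.arange(z1.size(0))      # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```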
- Dual-Kernel Graph Community Contrastive Learning [14.92920991249099]
Graph Contrastive Learning (GCL) has emerged as a powerful paradigm for training Graph Neural Networks (GNNs).
We propose an efficient GCL framework that transforms the input graph into a compact network of interconnected node sets.
Our method outperforms state-of-the-art GCL baselines in both effectiveness and scalability.
arXiv Detail & Related papers (2025-11-11T14:20:39Z)
- Self-supervised Subgraph Neural Network With Deep Reinforcement Walk Exploration [13.489730726871421]
Graph data represents complex real-world phenomena like chemical compounds, protein structures, and social networks.
Traditional Graph Neural Networks (GNNs) primarily rely on the message-passing mechanism, but their expressive power is limited and their predictions lack explainability.
Subgraph neural networks (SGNNs) and GNN explainers have emerged as potential solutions, but each has its limitations.
We propose a novel framework that integrates SGNNs with the generation approach of GNN explainers, named the Reinforcement Walk Exploration SGNN (RWE-SGNN).
arXiv Detail & Related papers (2025-02-03T20:40:33Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs under diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z)
- ALEX: Towards Effective Graph Transfer Learning with Noisy Labels [11.115297917940829]
We introduce a novel technique termed Balance Alignment and Information-aware Examination (ALEX) to address the problem of graph transfer learning.
ALEX first employs singular value decomposition to generate different views with crucial structural semantics, which help provide robust node representations.
Building on this foundation, an adversarial domain discriminator is incorporated for the implicit domain alignment of complex multi-modal distributions.
A generic sketch of the SVD view-generation step follows this entry.
arXiv Detail & Related papers (2023-09-26T04:59:49Z)
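The ALEX entry above uses singular value decomposition to generate views that preserve crucial structural semantics. Below is a minimal sketch of one common way to do this, truncated-SVD reconstructions of the adjacency matrix at different ranks; the specific ranks and the dense-matrix formulation are illustrative assumptions, not ALEX's exact procedure.

```python
import torch

def svd_views(adj, ranks=(16, 64)):
    # Truncated-SVD reconstructions of a (dense) adjacency matrix; each
    # rank keeps a different amount of structural detail, yielding views
    # with different levels of smoothing.
    u, s, vh = torch.linalg.svd(adj)
    return [u[:, :r] @ torch.diag(s[:r]) @ vh[:r, :] for r in ranks]
```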
- Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix [8.711977569042865]
Many approaches have been proposed for graph structure learning (GSL) to jointly learn a clean graph structure and corresponding representations.
This paper proposes a novel regularized GSL approach that aligns feature information with graph information.
We conduct experiments on real-world graphs to evaluate the effectiveness of our approach.
A generic sketch of a feature-graph alignment regularizer follows this entry.
arXiv Detail & Related papers (2023-07-05T09:05:14Z)
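The preceding entry regularizes graph structure learning by aligning feature information with graph information. A common concrete form of such an alignment term is the Dirichlet energy tr(X^T L X), which is small when nodes connected in the learned adjacency have similar features; the sketch below shows this generic regularizer under that assumption, not the paper's exact formulation.

```python
import torch

def alignment_penalty(x, adj):
    # Dirichlet energy tr(X^T L X) over the learned adjacency: penalizes
    # edges between nodes with dissimilar features, aligning the graph
    # structure with the feature space.
    lap = torch.diag(adj.sum(dim=1)) - adj   # unnormalized graph Laplacian
    return torch.trace(x.t() @ lap @ x)
```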
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
A sketch of one possible anchor-graph construction follows this entry.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
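The entry above generates an "anchor graph" from the original data as a self-supervised learning target. One plausible instantiation, shown below purely as an assumption, is a symmetrized kNN graph over raw node features; the paper's actual anchor-graph construction may differ.

```python
import torch
import torch.nn.functional as F

def knn_anchor_graph(x, k=10):
    # Symmetrized kNN graph over cosine similarities of node features,
    # usable as a fixed learning target ("anchor") for a learned topology.
    z = F.normalize(x, dim=1)
    sim = z @ z.t()
    idx = sim.topk(k + 1, dim=1).indices[:, 1:]   # drop self-similarity
    anchor = torch.zeros_like(sim)
    anchor.scatter_(1, idx, 1.0)
    return ((anchor + anchor.t()) > 0).float()    # make it symmetric
```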
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled samples to the massive remaining unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
A generic sketch of a discriminator-based MI-maximization loss follows this entry.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
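The GMI entry above trains an encoder by maximizing mutual information between inputs and representations. A common tractable surrogate, popularized by DGI-style models, scores matched pairs above corrupted ones with a discriminator; the sketch below uses that generic Jensen-Shannon-style bound, with the `torch.nn.Bilinear` scorer as an illustrative assumption rather than GMI's exact estimator.

```python
import torch
import torch.nn.functional as F

def js_mi_loss(h_real, h_corrupt, context, scorer):
    # Jensen-Shannon-style MI lower bound: the discriminator (`scorer`,
    # e.g. torch.nn.Bilinear) should rate true (representation, context)
    # pairs above corrupted ones; minimizing the BCE maximizes the bound.
    pos = scorer(h_real, context.expand_as(h_real))
    neg = scorer(h_corrupt, context.expand_as(h_corrupt))
    return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos))
            + F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
```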
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.