Sparse Bayesian Message Passing under Structural Uncertainty
- URL: http://arxiv.org/abs/2601.01207v1
- Date: Sat, 03 Jan 2026 15:16:12 GMT
- Title: Sparse Bayesian Message Passing under Structural Uncertainty
- Authors: Yoonhyuk Choi, Jiho Choi, Chanran Kim, Yumin Lee, Hawon Shin, Yeowon Jeon, Minjeong Kim, Jiwoo Kang
- Abstract summary: Semi-supervised learning on real-world graphs is frequently challenged by heterophily, where the observed graph is unreliable or label-disassortative. Many existing graph neural networks either rely on a fixed adjacency structure or attempt to handle structural noise through regularization. We explicitly capture structural uncertainty by modeling a posterior distribution over signed adjacency matrices, allowing each edge to be positive, negative, or absent.
- Score: 12.402205032785497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning on real-world graphs is frequently challenged by heterophily, where the observed graph is unreliable or label-disassortative. Many existing graph neural networks either rely on a fixed adjacency structure or attempt to handle structural noise through regularization. In this work, we explicitly capture structural uncertainty by modeling a posterior distribution over signed adjacency matrices, allowing each edge to be positive, negative, or absent. We propose a sparse signed message passing network that is naturally robust to edge noise and heterophily, which can be interpreted from a Bayesian perspective. By combining (i) posterior marginalization over signed graph structures with (ii) sparse signed message aggregation, our approach offers a principled way to handle both edge noise and heterophily. Experimental results demonstrate that our method outperforms strong baseline models on heterophilic benchmarks under both synthetic and real-world structural noise.
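The two ingredients named in the abstract can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration, not the authors' implementation: the function name `expected_signed_aggregate`, the parameterization of the edge posterior as per-edge probabilities of being positive or negative (absence gets the remaining mass), and the sparsification threshold of 0.1.

```python
import numpy as np

def expected_signed_aggregate(H, P_pos, P_neg):
    """One round of signed message passing, marginalizing over edge states.

    H:      (n, d) node features
    P_pos:  (n, n) posterior probability that edge (i, j) is positive
    P_neg:  (n, n) posterior probability that edge (i, j) is negative
            (probability of absence is 1 - P_pos - P_neg)
    """
    # Expected edge sign: E[s_ij] = (+1)*P_pos + (-1)*P_neg + 0*(absent)
    S = P_pos - P_neg
    # Sparsify: drop edges whose expected sign magnitude is negligible
    S[np.abs(S) < 0.1] = 0.0
    # Aggregate neighbor features weighted by expected signs
    return S @ H

rng = np.random.default_rng(0)
n, d = 5, 3
H = rng.standard_normal((n, d))
P_pos = rng.uniform(0, 0.5, (n, n))
P_neg = rng.uniform(0, 0.5, (n, n))
out = expected_signed_aggregate(H, P_pos, P_neg)
print(out.shape)  # (5, 3)
```

Marginalizing in expectation this way avoids sampling discrete adjacency matrices; negative expected signs push dissimilar neighbors apart, which is one plausible reading of why signed aggregation helps under heterophily.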
Related papers
- Improving LLM Reasoning with Homophily-aware Structural and Semantic Text-Attributed Graph Compression [55.51959317490934]
Large language models (LLMs) have demonstrated promising capabilities in Text-Attributed Graph (TAG) understanding. We argue that graphs inherently contain rich structural and semantic information, and that their effective exploitation can unlock potential gains in LLMs' reasoning performance. We propose Homophily-aware Structural and Semantic Compression for LLMs (HS2C), a framework centered on exploiting graph homophily.
arXiv Detail & Related papers (2026-01-13T03:35:18Z) - DAG DECORation: Continuous Optimization for Structure Learning under Hidden Confounding [0.0]
We study structure learning for linear Gaussian SEMs in the presence of latent confounding. We propose DECOR, a single likelihood-based estimator that jointly learns a DAG and a correlated noise model.
arXiv Detail & Related papers (2025-10-02T15:23:30Z) - A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs [54.62268052283014]
We present a unified theoretical perspective based on the framework of signed graphs. We show that many existing strategies implicitly introduce negative edges that alter message-passing to resist oversmoothing. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity.
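The sign-assignment idea in the SBP summary can be sketched as follows. This is a hedged illustration, not the paper's method: the function name, the use of cosine similarity, and the zero threshold are all assumptions.

```python
import numpy as np

def assign_signs_by_similarity(X, edges, threshold=0.0):
    """Assign +1 to edges linking similar nodes, -1 to dissimilar ones.

    X:     (n, d) node feature matrix
    edges: iterable of (i, j) index pairs
    """
    # Normalize rows so dot products become cosine similarities
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    signs = {}
    for i, j in edges:
        sim = float(Xn[i] @ Xn[j])
        signs[(i, j)] = 1 if sim >= threshold else -1
    return signs

X = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]])
signs = assign_signs_by_similarity(X, [(0, 1), (0, 2)])
print(signs)  # {(0, 1): 1, (0, 2): -1}
```

Negative edges then subtract, rather than add, a neighbor's message during propagation, which is the mechanism the summary credits with resisting oversmoothing.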
arXiv Detail & Related papers (2025-02-17T03:25:36Z) - Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z) - Structure-Aware Robustness Certificates for Graph Classification [13.643197515573029]
Certifying the robustness of a graph-based machine learning model poses a critical challenge for safety.
We develop a randomised smoothing method based on adding an anisotropic noise distribution to the input graph structure.
We show that our process generates structure-aware certificates for our classifiers, whereby the magnitude of robustness certificates can vary across different pre-defined structures of the graph.
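Randomized smoothing over graph structure, as summarized above, can be sketched generically. The function name, the per-edge (anisotropic) flip probabilities, and the majority-vote prediction are assumptions for illustration; the paper's actual noise distribution and certificate computation are not reproduced here.

```python
import numpy as np

def smoothed_predict(classify, A, flip_probs, n_samples=100, rng=None):
    """Majority-vote prediction under random edge-flip noise.

    classify:   function mapping an adjacency matrix to a class label
    A:          (n, n) binary adjacency matrix
    flip_probs: (n, n) per-edge flip probability (anisotropic noise)
    """
    rng = rng or np.random.default_rng(0)
    votes = {}
    for _ in range(n_samples):
        # Flip each edge independently with its own probability
        flips = rng.random(A.shape) < flip_probs
        A_noisy = np.where(flips, 1 - A, A)
        label = classify(A_noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Because the noise is anisotropic, edges in different pre-defined structural groups can be perturbed at different rates, which is what lets the certified radius vary across structures.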
arXiv Detail & Related papers (2023-06-20T22:01:37Z) - Heterophily-Aware Graph Attention Network [42.640057865981156]
Graph Neural Networks (GNNs) have shown remarkable success in graph representation learning.
Existing heterophilic GNNs tend to ignore the modeling of heterophily of each edge, which is also a vital part in tackling the heterophily problem.
We propose a novel Heterophily-Aware Graph Attention Network (HA-GAT) by fully exploring and utilizing the local distribution as the underlying heterophily.
arXiv Detail & Related papers (2023-02-07T03:21:55Z) - Alleviating neighbor bias: augmenting graph self-supervise learning with structural equivalent positive samples [1.0507062889290775]
We propose a signal-driven self-supervised method for graph representation learning.
It uses a topological information-guided structural equivalence sampling strategy.
The results show that the model performance can be effectively improved.
arXiv Detail & Related papers (2022-12-08T16:04:06Z) - Local Graph-homomorphic Processing for Privatized Distributed Systems [57.14673504239551]
We show that the added noise does not affect the performance of the learned model.
This is a significant improvement to previous works on differential privacy for distributed algorithms.
arXiv Detail & Related papers (2022-10-26T10:00:14Z) - Semi-Supervised Clustering of Sparse Graphs: Crossing the Information-Theoretic Threshold [3.6052935394000234]
The block model is a canonical random graph model for clustering and community detection on network-structured data.
No estimator based on the network topology can perform substantially better than chance on sparse graphs if the model parameter is below a certain threshold.
We prove that, with an arbitrary fraction of the labels revealed, detection is feasible throughout the parameter domain.
arXiv Detail & Related papers (2022-05-24T00:03:25Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Learning Node Representations from Noisy Graph Structures [38.32421350245066]
Noise prevails in real-world networks and can compromise them to a large extent.
We propose a novel framework to learn noise-free node representations and eliminate noises simultaneously.
arXiv Detail & Related papers (2020-12-04T07:18:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.