Rethinking Message Passing Neural Networks with Diffusion Distance-guided Stress Majorization
- URL: http://arxiv.org/abs/2511.19984v1
- Date: Tue, 25 Nov 2025 06:52:35 GMT
- Title: Rethinking Message Passing Neural Networks with Diffusion Distance-guided Stress Majorization
- Authors: Haoran Zheng, Renchi Yang, Yubo Zhou, Jianliang Xu
- Abstract summary: Message passing neural networks (MPNNs) have emerged as go-to models for learning on graph-structured data. MPNNs still incur severe issues such as over-smoothing and over-correlation. We propose a new MPNN model built on an optimization framework that combines stress majorization with orthogonal regularization.
- Score: 22.36875245255393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message passing neural networks (MPNNs) have emerged as go-to models for learning on graph-structured data in the past decade. Despite their effectiveness, most such models still incur severe issues such as over-smoothing and over-correlation, due to their underlying objective of minimizing the Dirichlet energy and the neighborhood aggregation operations derived from it. In this paper, we propose DDSM, a new MPNN model built on an optimization framework that combines stress majorization with orthogonal regularization to overcome the above issues. Further, we introduce diffusion distances between nodes into the framework to guide the new message passing operations, and we develop efficient algorithms for distance approximation, both backed by rigorous theoretical analyses. Our comprehensive experiments showcase that DDSM consistently and considerably outperforms 15 strong baselines on both homophilic and heterophilic graphs.
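To make the role of diffusion distances concrete, here is a minimal NumPy sketch of the standard diffusion-maps definition, under which two nodes are close when their t-step random-walk distributions agree. This dense O(n^3) reference version is for illustration only; the paper's exact formulation and its efficient approximation algorithms may well differ.

```python
import numpy as np

def diffusion_distances(A: np.ndarray, t: int = 3) -> np.ndarray:
    """Pairwise diffusion distances from a symmetric adjacency matrix A.

    Standard diffusion-maps definition: nodes are compared via their
    t-step random-walk distributions, weighted by the walk's stationary
    distribution. Dense reference version; the paper's approximation
    algorithms avoid this cubic cost.
    """
    deg = A.sum(axis=1)
    P = A / deg[:, None]                 # random-walk transition matrix
    Pt = np.linalg.matrix_power(P, t)    # t-step transition probabilities
    pi = deg / deg.sum()                 # stationary distribution of the walk
    # D_t(i, j)^2 = sum_k (Pt[i, k] - Pt[j, k])^2 / pi[k]
    diff = Pt[:, None, :] - Pt[None, :, :]           # (n, n, n) row differences
    return np.sqrt((diff ** 2 / pi[None, None, :]).sum(axis=-1))

# Toy usage: a 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(diffusion_distances(A, t=2), 3))
```

In a stress-majorization objective, such distances would play the role of the target distances that node embeddings try to preserve, which is the high-level idea the abstract describes.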
Related papers
- NNDM: NN_UNet Diffusion Model for Brain Tumor Segmentation [0.0]
We propose NNDM (NN_UNet Diffusion Model), a hybrid framework that integrates the robust feature extraction of NN-UNet with the generative capabilities of diffusion probabilistic models. In our approach, the diffusion model progressively refines the segmentation masks generated by NN-UNet by learning the residual error distribution between predicted and ground-truth masks. Experiments conducted on the BraTS 2021 datasets demonstrate that NNDM achieves superior performance compared to conventional U-Net and transformer-based baselines.
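As a rough illustration of this residual-refinement idea, the sketch below runs plain DDPM ancestral sampling over the residual and adds the result back to the coarse mask. The noise predictor `eps_model` is a hypothetical stand-in for the trained conditional network; the actual conditioning, noise schedule, and sampler used by NNDM may differ.

```python
import numpy as np

def refine_mask(coarse_mask, eps_model, T=50, rng=None):
    """Sample the residual between a coarse mask and the (unknown)
    ground truth via DDPM ancestral sampling, then add it back.
    eps_model(x_t, t, cond) is a hypothetical trained noise predictor
    conditioned on the coarse mask."""
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(1e-4, 0.02, T)          # assumed linear schedule
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)
    x = rng.standard_normal(coarse_mask.shape)  # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, coarse_mask)      # predicted noise at step t
        mean = (x - betas[t] / np.sqrt(1 - abar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return coarse_mask + x                      # refined mask = coarse + residual

# Dummy predictor so the sketch runs end to end; a real model is a trained net.
dummy_eps = lambda x, t, cond: np.zeros_like(x)
refined = refine_mask(np.zeros((8, 8)), dummy_eps)
```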
arXiv Detail & Related papers (2025-10-08T22:19:08Z)
- ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a reparameterized masked diffusion model for node classification with structured predictions. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z)
- Integrating Intermediate Layer Optimization and Projected Gradient Descent for Solving Inverse Problems with Diffusion Models [19.445391508424667]
Inverse problems (IPs) involve reconstructing signals from noisy observations. Diffusion models (DMs) have emerged as a powerful framework for solving IPs, achieving remarkable reconstruction performance. However, existing DM-based methods frequently encounter issues such as heavy computational demands and suboptimal convergence. We propose two novel methods, DMILO and DMILO-PGD, to address these challenges.
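The projected-gradient-descent half of DMILO-PGD can be illustrated in isolation: alternate a gradient step on the data-fidelity term with a projection onto a constraint set. In the paper the projection involves the diffusion model's range; the toy sketch below uses a nonnegativity projection purely as a stand-in.

```python
import numpy as np

def pgd_inverse(A, y, project, steps=100, lr=0.15):
    """Projected gradient descent for a linear inverse problem y ≈ A x:
    gradient step on 0.5 * ||Ax - y||^2, then a projection enforcing the
    prior. Generic PGD sketch only; DMILO/DMILO-PGD combine it with
    intermediate-layer optimization of a diffusion model."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
        x = project(x - lr * grad)      # projection, e.g. onto a model's range
    return x

# Toy usage with a nonnegativity projection as a stand-in prior
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = A @ np.array([1.0, 0.5])
x_hat = pgd_inverse(A, y, project=lambda v: np.clip(v, 0.0, None))
print(np.round(x_hat, 3))               # ≈ [1.0, 0.5]
```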
arXiv Detail & Related papers (2025-05-27T06:49:02Z)
- Data-Driven and Theory-Guided Pseudo-Spectral Seismic Imaging Using Deep Neural Network Architectures [0.0]
Full Waveform Inversion (FWI) reconstructs high-resolution subsurface models but faces challenges with solver selection and data availability. Deep Learning (DL) offers a promising alternative that bridges data-driven and physics-based methods. This thesis integrates pseudo-spectral FWI into DL, formulating both data-driven and theory-guided approaches.
arXiv Detail & Related papers (2025-02-26T05:46:53Z)
- Residual-based attention and connection to information bottleneck theory in PINNs [0.393259574660092]
Physics-informed neural networks (PINNs) have seen a surge of interest in recent years.
We propose an efficient, gradient-less weighting scheme for PINNs that accelerates the convergence of dynamic or static systems.
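A hedged sketch of such a residual-based weighting scheme: each collocation point's weight decays geometrically and grows with the point's normalized PDE residual, so points that remain poorly fit receive increasing emphasis, with no extra gradient computations. The exact update rule and hyperparameters in the paper may differ.

```python
import numpy as np

def update_rba_weights(weights, residuals, gamma=0.999, eta=0.01):
    """Gradient-free residual-based attention update for PINN
    collocation points: decay by gamma, grow in proportion to the
    point's residual normalized by the current maximum."""
    norm_res = np.abs(residuals) / (np.abs(residuals).max() + 1e-12)
    return gamma * weights + eta * norm_res

# Usage inside a training loop: weight the pointwise residual loss
rng = np.random.default_rng(0)
weights = np.ones(1000)
residuals = rng.standard_normal(1000)           # PDE residuals r_i at points
weights = update_rba_weights(weights, residuals)
loss = np.mean((weights * residuals) ** 2)      # weighted residual loss
```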
arXiv Detail & Related papers (2023-07-01T16:29:55Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for communication- and memory-efficient distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We identify and justify two weaknesses of implicit GNNs: constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their inability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) that is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
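The core mechanism of an implicit GNN layer is a fixed-point equation solved iteratively, rather than a fixed stack of propagation steps. Below is a single-scale Picard-iteration sketch; MGNNI's actual model additionally mixes several propagation scales A_hat^k and trains via implicit differentiation, which this toy version omits.

```python
import numpy as np

def implicit_gnn_layer(A_hat, X, W, U, iters=50, gamma=0.8, tol=1e-6):
    """Solve Z = tanh(gamma * A_hat @ Z @ W + X @ U) by Picard iteration.
    Convergence needs the map to be a contraction (roughly
    gamma * ||A_hat|| * ||W|| < 1), which implicit-GNN papers enforce
    via normalization."""
    Z = np.zeros((X.shape[0], W.shape[1]))
    for _ in range(iters):
        Z_new = np.tanh(gamma * A_hat @ Z @ W + X @ U)
        if np.linalg.norm(Z_new - Z) < tol:     # fixed point reached
            return Z_new
        Z = Z_new
    return Z

# Toy usage with a row-normalized random adjacency matrix
rng = np.random.default_rng(0)
n, f, d = 6, 4, 3
A = rng.random((n, n))
A_hat = A / A.sum(axis=1, keepdims=True)
W = 0.3 * rng.standard_normal((d, d))           # small norm for contraction
U = rng.standard_normal((f, d))
Z = implicit_gnn_layer(A_hat, rng.standard_normal((n, f)), W, U)
```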
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
- Mixed Graph Contrastive Network for Semi-Supervised Node Classification [63.924129159538076]
We propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent embeddings by an unperturbed augmentation strategy and a correlation reduction mechanism. By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
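The correlation-reduction idea can be sketched as a Barlow-Twins-style loss on the cross-view correlation matrix of two embedding views: diagonal entries are pulled toward one (view alignment) and off-diagonal entries toward zero (feature decorrelation). MGCN's exact objective may differ; `lam` below is an assumed trade-off weight.

```python
import numpy as np

def correlation_reduction_loss(Z1, Z2, lam=0.05):
    """Push the cross-view correlation matrix toward the identity:
    diagonal terms align the two views, off-diagonal terms decorrelate
    the embedding dimensions."""
    Z1 = (Z1 - Z1.mean(0)) / (Z1.std(0) + 1e-9)   # standardize per dimension
    Z2 = (Z2 - Z2.mean(0)) / (Z2.std(0) + 1e-9)
    C = Z1.T @ Z2 / Z1.shape[0]                   # cross-correlation matrix
    on_diag = ((np.diag(C) - 1.0) ** 2).sum()
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return on_diag + lam * off_diag

# Toy usage: two slightly different views of the same embeddings
rng = np.random.default_rng(0)
Z = rng.standard_normal((256, 16))
print(correlation_reduction_loss(Z + 0.1 * rng.standard_normal(Z.shape), Z))
```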
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
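Evolving parameters on the orthogonal group O(d) can be sketched with a Cayley-transform step: a skew-symmetric generator is mapped to an orthogonal matrix, so the weights stay exactly orthogonal, which is what keeps gradients from vanishing or exploding. This generic update is only a stand-in for ODEtoODE's specific parameterized flow.

```python
import numpy as np

def cayley_step(W, G, h=0.1):
    """One discretized step of a flow on O(d): build a skew-symmetric
    generator from G and apply the Cayley transform, which maps
    skew-symmetric matrices to orthogonal ones, so W stays orthogonal."""
    A = G - G.T                                   # skew-symmetric generator
    I = np.eye(W.shape[0])
    # Q = (I + h/2 * A)^{-1} (I - h/2 * A) is orthogonal for skew A
    Q = np.linalg.solve(I + (h / 2) * A, I - (h / 2) * A)
    return Q @ W

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # start from an orthogonal W
W = cayley_step(W, rng.standard_normal((4, 4)))
print(np.round(W @ W.T, 6))                       # ≈ identity: still orthogonal
```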
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)