P-DROP: Poisson-Based Dropout for Graph Neural Networks
- URL: http://arxiv.org/abs/2505.21783v1
- Date: Tue, 27 May 2025 21:31:14 GMT
- Title: P-DROP: Poisson-Based Dropout for Graph Neural Networks
- Authors: Hyunsik Yun
- Abstract summary: Over-smoothing is a major challenge in Graph Neural Networks (GNNs). We propose a novel node selection strategy based on Poisson processes, introducing stochastic but structure-aware updates. We explore two applications of this strategy: as a replacement for dropout-based regularization and as a dynamic subgraph training scheme.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-smoothing remains a major challenge in Graph Neural Networks (GNNs), where repeated message passing causes node representations to converge and lose discriminative power. To address this, we propose a novel node selection strategy based on Poisson processes, introducing stochastic but structure-aware updates. Specifically, we equip each node with an independent Poisson clock, enabling asynchronous and localized updates that preserve structural diversity. We explore two applications of this strategy: as a replacement for dropout-based regularization and as a dynamic subgraph training scheme. Experimental results on standard benchmarks (Cora, Citeseer, Pubmed) demonstrate that our Poisson-based method yields competitive or improved accuracy compared to traditional Dropout, DropEdge, and DropNode approaches, particularly in later training stages.
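The abstract describes the core mechanism only at a high level: each node carries an independent Poisson clock and is updated only when that clock fires. The sketch below is a minimal illustration of this idea, not the authors' implementation; the intensity `rate`, the window length `dt`, and the helper names are assumptions for illustration. It relies on the standard fact that a Poisson clock with intensity λ fires at least once in a window of length Δt with probability 1 - exp(-λΔt).

```python
import math
import torch

def poisson_update_mask(num_nodes: int, rate: float, dt: float = 1.0,
                        device="cpu") -> torch.Tensor:
    # A Poisson clock with intensity `rate` fires at least once in a
    # window of length dt with probability 1 - exp(-rate * dt), so a
    # per-step Bernoulli draw with that probability reproduces which
    # nodes fire during the step.
    p_fire = 1.0 - math.exp(-rate * dt)
    return torch.rand(num_nodes, device=device) < p_fire

def asynchronous_update(h_prev: torch.Tensor, h_new: torch.Tensor,
                        rate: float = 0.5, dt: float = 1.0) -> torch.Tensor:
    # Nodes whose clock fired adopt the freshly aggregated representation;
    # all other nodes keep their previous state (localized, asynchronous update).
    mask = poisson_update_mask(h_prev.size(0), rate, dt, device=h_prev.device)
    return torch.where(mask.unsqueeze(-1), h_new, h_prev)

# Hypothetical usage inside a GNN training step (names assumed):
# h = asynchronous_update(h, gnn_layer(h, edge_index), rate=0.5)
```

In the dropout-replacement application mentioned above, such a mask would be resampled at every training step, so only the firing nodes receive aggregated messages while the rest retain their previous representations.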
Related papers
- BANGS: Game-Theoretic Node Selection for Graph Self-Training [39.70859692050266]
Graph self-training is a semi-supervised learning method that iteratively selects a set of unlabeled data to retrain the underlying graph neural network (GNN) model.
We propose BANGS, a novel framework that unifies the labeling strategy with conditional mutual information as the objective of node selection.
Our approach -- grounded in game theory -- selects nodes in a combinatorial fashion and provides theoretical guarantees for robustness under a noisy objective.
arXiv Detail & Related papers (2024-10-12T03:31:28Z) - Reinforcement Learning for Node Selection in Branch-and-Bound [52.2648997215667]
Current state-of-the-art selectors utilize either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data.
We propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state, rather than just isolated nodes.
arXiv Detail & Related papers (2023-09-29T19:55:56Z) - Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module [65.81781176362848]
Graph Neural Networks (GNNs) can learn from graph-structured data through neighborhood information aggregation.
As the number of layers increases, node representations become indistinguishable, which is known as over-smoothing.
We propose a Posterior-Sampling-based, Node-distinguishable Residual module (PSNR).
arXiv Detail & Related papers (2023-05-09T12:03:42Z) - ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z) - Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z) - Neural Structured Prediction for Inductive Node Classification [29.908759584092167]
This paper studies node classification in the inductive setting, aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs.
We present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds.
arXiv Detail & Related papers (2022-04-15T15:50:27Z) - Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z) - Stochastic Graph Recurrent Neural Network [6.656993023468793]
We propose SGRNN, a novel neural architecture that applies latent variables to simultaneously capture evolution in node attributes and topology.
Specifically, deterministic states are separated from stochastic states in the iterative process to suppress mutual interference.
Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2020-09-01T16:14:30Z) - GRADE: Graph Dynamic Embedding [76.85156209917932]
GRADE is a probabilistic model that learns to generate evolving node and community representations by imposing a random-walk prior on their trajectories.
Our model also learns node community membership which is updated between time steps via a transition matrix.
Experiments demonstrate GRADE outperforms baselines in dynamic link prediction, shows favourable performance on dynamic community detection, and identifies coherent and interpretable evolving communities.
arXiv Detail & Related papers (2020-07-16T01:17:24Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary Environments [12.251253742049437]
Graph representation learning is gaining popularity in a wide range of applications, such as social network analysis.
Applying graph neural networks (GNNs) in a real-world application is still challenging due to non-stationary environments.
We present Hop Sampling, a straightforward regularization method that can effectively prevent GNNs from overfitting.
arXiv Detail & Related papers (2020-06-26T10:22:57Z) - Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.