Layer-diverse Negative Sampling for Graph Neural Networks
- URL: http://arxiv.org/abs/2403.11408v1
- Date: Mon, 18 Mar 2024 01:48:50 GMT
- Title: Layer-diverse Negative Sampling for Graph Neural Networks
- Authors: Wei Duan, Jie Lu, Yu Guang Wang, Junyu Xuan
- Abstract summary: Graph neural networks (GNNs) are a powerful solution for various structure learning applications.
We propose a layer-diverse negative sampling method for message-passing propagation.
We demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance.
- Score: 19.84639875144519
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs, relying on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology, and thus has strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
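The abstract names two mechanisms: diversity-seeking negative sampling through a determinantal point process (DPP), and squeezing the candidate space across layers. A minimal sketch of both ideas follows, assuming a greedy MAP approximation of the DPP over candidate node embeddings; the function names and the squeezing operator are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of DPP-style diverse negative sampling (not the paper's code).
import numpy as np

def greedy_dpp_negatives(candidates: np.ndarray, k: int) -> list:
    """Greedily pick k mutually diverse rows of `candidates` (n x d).

    With kernel L = X X^T, a DPP favours subsets whose embeddings span
    more volume, i.e. are more diverse; greedy log-det gain approximates
    the MAP subset.
    """
    X = candidates / (np.linalg.norm(candidates, axis=1, keepdims=True) + 1e-12)
    L = X @ X.T
    selected = []
    for _ in range(k):
        best, best_gain = -1, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            # larger log-determinant <=> more diverse subset
            gain = np.linalg.slogdet(L[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx)))[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

def squeeze(candidates: np.ndarray, used: np.ndarray) -> np.ndarray:
    """Rough analogue of layer-wise space squeezing: project candidates away
    from the directions already used as negatives in earlier layers."""
    q, _ = np.linalg.qr(used.T)          # orthonormal basis of used directions
    return candidates - candidates @ q @ q.T
```

Greedy log-determinant selection is a common practical stand-in for exact DPP sampling, which is cubic in the candidate-set size; the paper's sampling matrix plays a related role but is not reproduced here.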
Related papers
- Diffusion-based Negative Sampling on Graphs for Link Prediction [8.691564173331924]
Link prediction is a fundamental task for graph analysis with important applications on the Web, such as social network analysis and recommendation systems.
We propose a novel strategy of multi-level negative sampling that enables negative node generation with flexible and controllable "hardness" levels from the latent space.
Our method, called Conditional Diffusion-based Multi-level Negative Sampling (DMNS), leverages the Markov chain property of diffusion models to generate negative nodes in multiple levels of variable hardness.
arXiv Detail & Related papers (2024-03-25T23:07:31Z)
- Learning How to Propagate Messages in Graph Neural Networks [55.2083896686782]
This paper studies the problem of learning message propagation strategies for graph neural networks (GNNs).
We introduce the optimal propagation steps as latent variables to help find the maximum-likelihood estimation of the GNN parameters.
Our proposed framework can effectively learn personalized and interpretable message propagation strategies in GNNs.
arXiv Detail & Related papers (2023-10-01T15:09:59Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
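The entry above proposes synthesizing samples by mixing minority and majority points. A minimal sketch of that mixing step, under assumed names and a mixup-style interpolation (the paper's exact scheme may differ):

```python
# Illustrative minority-majority mixing for imbalanced classification.
import numpy as np

def mix_minority(X_min, X_maj, n_new, alpha=0.75, rng=None):
    """Create n_new synthetic samples between minority and majority points."""
    if rng is None:
        rng = np.random.default_rng(0)
    a = rng.choice(len(X_min), size=n_new)           # minority anchors
    b = rng.choice(len(X_maj), size=n_new)           # majority partners
    # lam kept >= 0.5 so synthetic points stay on the minority side
    lam = rng.beta(alpha, alpha, size=(n_new, 1)).clip(0.5, 1.0)
    return lam * X_min[a] + (1.0 - lam) * X_maj[b]
```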
- Enhancing Graph Contrastive Learning with Node Similarity [4.60032347615771]
Graph contrastive learning (GCL) is a representative framework for self-supervised learning.
GCL learns node representations by contrasting semantically similar nodes (positive samples) and dissimilar nodes (negative samples) with anchor nodes.
We propose an enhanced objective that contains all positive samples and no false-negative samples.
arXiv Detail & Related papers (2022-08-13T22:49:20Z)
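The objective described above keeps all positive samples while excluding false negatives. One hypothetical way to realize this is an InfoNCE-style loss that drops candidate negatives too similar to the anchor; the threshold and names below are assumptions, not the paper's exact objective:

```python
# Sketch: contrastive loss that masks likely false negatives by similarity.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, candidates, tau=0.5, fn_thresh=0.9):
    a = F.normalize(anchor, dim=-1)       # (d,)
    p = F.normalize(positive, dim=-1)     # (d,)
    c = F.normalize(candidates, dim=-1)   # (n, d)
    sims = c @ a                          # cosine similarity to the anchor
    keep = sims < fn_thresh               # drop likely false negatives
    logits = torch.cat([(a @ p).view(1), sims[keep]]) / tau
    # the positive sits at index 0 of the logits
    return F.cross_entropy(logits.view(1, -1), torch.zeros(1, dtype=torch.long))
```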
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs by using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
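The quantized-representation idea in the VQ-GNN entry can be illustrated with the basic vector-quantization step: replace each node embedding with its nearest codebook vector, so aggregation touches only K codewords rather than all neighbours. This is a simplified sketch; the full framework also uses a low-rank form of the convolution matrix, which is omitted here.

```python
# Sketch of nearest-codeword quantization with a straight-through gradient.
import torch

def quantize(h: torch.Tensor, codebook: torch.Tensor):
    """h: (n, d) node embeddings; codebook: (K, d) learned codewords."""
    dists = torch.cdist(h, codebook)      # (n, K) pairwise distances
    codes = dists.argmin(dim=1)           # nearest codeword per node
    h_q = codebook[codes]                 # quantized embeddings
    # straight-through estimator: forward uses h_q, backward flows to h
    return h + (h_q - h).detach(), codes
```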
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training [29.240686573485718]
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique, adaptive self-adversarial (ASA) negative sampling.
arXiv Detail & Related papers (2021-02-14T16:11:56Z)
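Self-adversarial negative sampling, as in the ASA entry above, reweights candidate negatives by the model's own scores so that harder negatives are drawn more often. A hedged one-function sketch (the paper's adaptive, parameter-free variant differs in how it bounds hardness):

```python
# Sketch: sample k hard negatives in proportion to model scores.
import torch

def self_adversarial_negatives(scores: torch.Tensor, k: int, temp: float = 1.0):
    """scores: model scores of candidate negatives (higher = harder)."""
    probs = torch.softmax(scores / temp, dim=0)
    return torch.multinomial(probs, num_samples=k, replacement=False)
```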
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding aims to learn a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how positive and negative samples are drawn.
In this paper, we propose SCE for unsupervised network embedding using only negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary Environments [12.251253742049437]
Graph representation learning is gaining popularity in a wide range of applications, such as social network analysis.
Applying graph neural networks (GNNs) in a real-world application is still challenging due to non-stationary environments.
We present Hop Sampling, a straightforward regularization method that can effectively prevent GNNs from overfitting.
arXiv Detail & Related papers (2020-06-26T10:22:57Z)
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
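The Metropolis-Hastings acceleration in the MCNS entry can be sketched as a Markov chain over nodes that accepts proposals according to an unnormalized target density; everything below (the uniform proposal, names, burn-in) is an illustrative assumption rather than the paper's procedure.

```python
# Sketch: Metropolis-Hastings chain for drawing negative samples.
import numpy as np

def mh_negative_samples(q, n_nodes, anchor, k, burn_in=50):
    """q(anchor, node) -> unnormalized target density for negatives."""
    rng = np.random.default_rng(0)
    state = rng.integers(n_nodes)
    samples = []
    for step in range(burn_in + k):
        proposal = rng.integers(n_nodes)  # symmetric uniform proposal
        accept = min(1.0, q(anchor, proposal) / max(q(anchor, state), 1e-12))
        if rng.random() < accept:
            state = proposal
        if step >= burn_in:
            samples.append(state)
    return samples
```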