Learning from the Dark: Boosting Graph Convolutional Neural Networks
with Diverse Negative Samples
- URL: http://arxiv.org/abs/2210.00728v1
- Date: Mon, 3 Oct 2022 06:14:21 GMT
- Title: Learning from the Dark: Boosting Graph Convolutional Neural Networks
with Diverse Negative Samples
- Authors: Wei Duan, Junyu Xuan, Maoying Qiao, Jie Lu
- Abstract summary: Graphs have a large, dark, all-but-forgotten world in which we find the non-neighbouring nodes (negative samples).
We show that this great dark world holds a substantial amount of information that might be useful for representation learning.
Our overall idea is to select appropriate negative samples for each node and incorporate the negative information contained in these samples into the representation updates.
- Score: 19.588559820438718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Neural Networks (GCNs) have been generally accepted
to be an effective tool for node representation learning. An interesting way to
understand GCNs is to think of them as a message passing mechanism where each
node updates its representation by accepting information from its neighbours
(also known as positive samples). However, beyond these neighbouring nodes,
graphs have a large, dark, all-but-forgotten world in which we find the
non-neighbouring nodes (negative samples). In this paper, we show that this
great dark world holds a substantial amount of information that might be useful
for representation learning. More specifically, it can provide negative
information about the node representations. Our overall idea is to select
appropriate negative samples for each node and incorporate the negative
information contained in these samples into the representation updates.
Moreover, we show that the process of selecting the negative samples is not
trivial. This paper therefore begins by describing the criteria for a good
negative sample, followed by a determinantal point process algorithm for
efficiently obtaining such samples. A GCN, boosted by diverse negative samples,
then jointly considers the positive and negative information when passing
messages. Experimental evaluations show that this idea not only improves the
overall performance of standard representation learning but also significantly
alleviates over-smoothing problems.
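Read procedurally, the abstract describes two components: a determinantal point process (DPP) that selects diverse negative samples from a node's non-neighbours, and a message-passing step that combines the usual positive message with a subtracted negative one. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the RBF kernel, the greedy MAP approximation to DPP sampling, the helper names, and the weight mu are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF similarities, used below as the DPP kernel L.
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def greedy_dpp_select(X, k):
    # Greedy MAP approximation to a k-DPP: repeatedly add the candidate that
    # most increases log det(L_S); near-duplicates of already-selected items
    # barely increase the determinant, so the selection stays diverse.
    L = rbf_kernel(X)
    selected, remaining = [], list(range(len(X)))
    for _ in range(min(k, len(X))):
        best, best_val = remaining[0], -np.inf
        for j in remaining:
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best, best_val = j, logdet
        selected.append(best)
        remaining.remove(best)
    return selected

def negative_boosted_layer(H, neighbours, negatives, mu=0.5):
    # One propagation step: add the mean positive (neighbour) message and
    # subtract a scaled mean message from the diverse negative samples.
    H_new = np.empty_like(H)
    for i in range(len(H)):
        pos = H[neighbours[i]].mean(axis=0) if neighbours[i] else H[i]
        neg = H[negatives[i]].mean(axis=0) if negatives[i] else 0.0
        H_new[i] = H[i] + pos - mu * neg
    return H_new

# Toy usage: pick 2 diverse negatives for node 0 from its non-neighbours.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
neighbours = {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3, 5], 5: [4]}
cands = [j for j in range(6) if j != 0 and j not in neighbours[0]]
negatives = {i: [] for i in range(6)}
negatives[0] = [cands[t] for t in greedy_dpp_select(H[cands], k=2)]
H1 = negative_boosted_layer(H, neighbours, negatives)
```

Subtracting a negative message pushes a node's representation away from the mean of its selected non-neighbours, which is one way to see why such a term can counteract the over-smoothing that pure neighbour averaging induces.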
Related papers
- Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method [17.760628718072144]
InfoNCE uses augmentation techniques to obtain two views, where a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are regarded as negative samples.
The goal is to minimize the distance between the anchor node and positive samples and maximize the distance to negative samples.
Due to the lack of label information during training, InfoNCE inevitably treats samples from the same class as negative samples, leading to the issue of false negative samples.
We propose GraphRank, a simple yet efficient graph contrastive learning method that addresses the problem of false negative samples (see the InfoNCE sketch after this list).
arXiv Detail & Related papers (2023-10-23T03:15:57Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Graph Convolutional Neural Networks with Diverse Negative Samples via Decomposed Determinant Point Processes [21.792376993468064]
Graph convolutional networks (GCNs) have achieved great success in graph representation learning.
In this paper, we use the quality-diversity decomposition of determinantal point processes to obtain diverse negative samples (cf. the DPP sketch after the abstract above).
We propose a new shortest-path-based method to improve computational efficiency.
arXiv Detail & Related papers (2022-12-05T06:31:31Z)
- SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives (see the sampling sketch after this list).
arXiv Detail & Related papers (2022-10-21T07:18:05Z)
- Enhancing Graph Contrastive Learning with Node Similarity [4.60032347615771]
Graph contrastive learning (GCL) is a representative framework for self-supervised learning.
GCL learns node representations by contrasting semantically similar nodes (positive samples) and dissimilar nodes (negative samples) with anchor nodes.
We propose an enhanced objective that contains all positive samples and no false-negative samples.
arXiv Detail & Related papers (2022-08-13T22:49:20Z)
- Node Representation Learning in Graph via Node-to-Neighbourhood Mutual Information Maximization [27.701736055800314]
The key to learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood.
We present a self-supervised node representation learning strategy via directly maximizing the mutual information between the hidden representations of nodes and their neighbourhood.
Our framework is optimized via a surrogate contrastive loss, where the positive selection underpins the quality and efficiency of representation learning.
arXiv Detail & Related papers (2022-03-23T08:21:10Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation (see the ordering-and-convolution sketch after this list).
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z)
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding learns a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how to draw positive and negative samples.
In this paper, we propose SCE for unsupervised network embedding using only negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
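Several entries above (GraphRank, STERLING, Enhancing Graph Contrastive Learning, and the node-to-neighbourhood paper) revolve around an InfoNCE-style objective. Below is a minimal cross-view sketch under simplifying assumptions: two augmented views of the same nodes, cosine similarity with temperature tau, and only cross-view negatives; practical GCL losses often add intra-view negatives and symmetrise over both views.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # Row i of z1 is the anchor, row i of z2 its positive, and every other
    # row of z2 is treated as a negative.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()              # positives sit on the diagonal

# Toy usage: two noisy "views" of the same 8 node embeddings.
rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
loss = info_nce(z + 0.1 * rng.normal(size=z.shape),
                z + 0.1 * rng.normal(size=z.shape))
```

Because the positive labels are just row indices, nodes of the same class in other rows are inevitably scored as negatives, which is exactly the false-negative problem the GraphRank entry targets.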
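For the SimANS entry, a hedged sketch of ambiguous-negative sampling: probability mass is concentrated on candidates whose relevance scores sit near the positive's. The Gaussian-shaped weighting and the hyper-parameters a and b are illustrative assumptions consistent with the summary, not necessarily the paper's exact distribution.

```python
import numpy as np

def ambiguous_negative_sampling(neg_scores, pos_score, k, a=1.0, b=0.0, rng=None):
    # Weight each candidate by the closeness of its score to the positive's:
    # far-below negatives are too easy to be informative, far-above ones are
    # disproportionately likely to be false negatives.
    rng = rng or np.random.default_rng()
    w = np.exp(-a * (np.asarray(neg_scores) - pos_score - b) ** 2)
    return rng.choice(len(neg_scores), size=k, replace=False, p=w / w.sum())

# Toy usage: retriever scores for 5 candidate negatives, positive scored 0.6.
scores = np.array([0.10, 0.40, 0.55, 0.60, 0.90])
picked = ambiguous_negative_sampling(scores, pos_score=0.6, k=3,
                                     rng=np.random.default_rng(2))
```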
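Finally, for the Node2Seq entry, a sketch of the ordering-and-convolution idea: score neighbours against the target (dot-product attention is an assumption here), sort them into a sequence, and slide a 1D convolution over that sequence so each position receives its own learned weight. The names and shapes are hypothetical.

```python
import numpy as np

def node2seq_style_aggregate(h_target, h_nbrs, conv_w):
    # conv_w has shape (window, d_in, d_out): one weight slice per position.
    scores = h_nbrs @ h_target                  # attention scores vs. the target
    seq = h_nbrs[np.argsort(-scores)]           # neighbours as an ordered sequence
    window = conv_w.shape[0]
    if len(seq) < window:                       # pad short neighbourhoods
        seq = np.vstack([seq, np.zeros((window - len(seq), seq.shape[1]))])
    outs = [np.einsum('wd,wde->e', seq[i:i + window], conv_w)
            for i in range(len(seq) - window + 1)]
    return np.mean(outs, axis=0)                # pool the convolution outputs

# Toy usage: aggregate 5 neighbours of a 4-dim node into an 8-dim message.
rng = np.random.default_rng(3)
agg = node2seq_style_aggregate(rng.normal(size=4), rng.normal(size=(5, 4)),
                               0.1 * rng.normal(size=(3, 4, 8)))
```

Sorting is what makes the convolution's positional weights meaningful: the first filter positions always see the highest-attention neighbours, so different neighbours genuinely receive different weights.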
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.