Link Prediction with Non-Contrastive Learning
- URL: http://arxiv.org/abs/2211.14394v2
- Date: Tue, 28 Mar 2023 18:38:56 GMT
- Title: Link Prediction with Non-Contrastive Learning
- Authors: William Shiao, Zhichun Guo, Tong Zhao, Evangelos E. Papalexakis, Yozen
Liu, Neil Shah
- Abstract summary: Graph self-supervised learning (SSL) aims to derive useful node representations without labeled data.
Many state-of-the-art graph SSL methods are contrastive methods, which use a combination of positive and negative samples.
Recent literature introduced non-contrastive methods, which instead only use positive samples.
- Score: 19.340519670329382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recent focal area in the space of graph neural networks (GNNs) is graph
self-supervised learning (SSL), which aims to derive useful node
representations without labeled data. Notably, many state-of-the-art graph SSL
methods are contrastive methods, which use a combination of positive and
negative samples to learn node representations. Owing to challenges in negative
sampling (slowness and model sensitivity), recent literature introduced
non-contrastive methods, which instead only use positive samples. Though such
methods have shown promising performance in node-level tasks, their suitability
for link prediction tasks, which are concerned with predicting link existence
between pairs of nodes (and have broad applicability to recommendation systems
contexts), is yet unexplored. In this work, we extensively evaluate the
performance of existing non-contrastive methods for link prediction in both
transductive and inductive settings. While most existing non-contrastive
methods perform poorly overall, we find that, surprisingly, BGRL generally
performs well in transductive settings. However, it performs poorly in the more
realistic inductive settings where the model has to generalize to links to/from
unseen nodes. We find that non-contrastive models tend to overfit to the
training graph and use this analysis to propose T-BGRL, a novel non-contrastive
framework that incorporates cheap corruptions to improve the generalization
ability of the model. This simple modification strongly improves inductive
performance in 5/6 of our datasets, with up to a 120% improvement in
Hits@50, all with comparable speed to other non-contrastive baselines and up to
14x faster than the best-performing contrastive baseline. Our work imparts
interesting findings about non-contrastive learning for link prediction and
paves the way for future researchers to further expand upon this area.
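As a rough illustration of the bootstrapped, corruption-based objective described above, the following PyTorch sketch combines a BGRL-style positive term with a cheap corruption term in the spirit of T-BGRL. The feature-shuffling corruption, the `lam` weight, and the exact way the two terms are combined are illustrative assumptions rather than the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_encoder, online_encoder, tau=0.99):
    # BGRL avoids negative sampling by maintaining a target encoder as an
    # exponential moving average (EMA) of the online encoder's weights.
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)

def cheap_corruption(x):
    # Illustrative "cheap corruption": permute the rows of the node-feature
    # matrix, which is far cheaper than mining hard negative samples.
    return x[torch.randperm(x.size(0), device=x.device)]

def t_bgrl_style_loss(online_pred, target_pos, target_neg, lam=0.5):
    # Pull the online prediction toward the target embedding of a clean
    # augmentation and push it away from the embedding of the corrupted
    # graph; both targets are treated with stop-gradient, as in BGRL.
    pos = F.cosine_similarity(online_pred, target_pos.detach(), dim=-1)
    neg = F.cosine_similarity(online_pred, target_neg.detach(), dim=-1)
    return ((1.0 - pos) + lam * (1.0 + neg)).mean()
```

Embeddings learned this way would then be scored for link prediction with, for example, a dot product or a small MLP decoder over node pairs.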
Related papers
- Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning [27.278097015083343] (arXiv, 2024-08-09)
Contrastive learning requires negative samples to prevent model collapse and learn discriminative representations.
We introduce a cross-attention module to predict the supportiveness score of a neighbor with respect to the anchor node.
Our method mitigates class collision from negative and noisy positive samples, concurrently enhancing intra-class compactness.
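The summary does not specify the module's exact form, but a hypothetical cross-attention scorer along these lines could compute the supportiveness of each neighbor; the projection matrices `w_q` and `w_k` and the softmax normalization are assumptions for illustration.

```python
import math
import torch

def supportiveness_scores(anchor, neighbors, w_q, w_k):
    # Hypothetical cross-attention scorer: the anchor node acts as the
    # query and its neighbors as keys; higher scores mark neighbors that
    # are more "supportive" (safer to treat as positives).
    q = anchor @ w_q                       # (d_out,)
    k = neighbors @ w_k                    # (num_neighbors, d_out)
    scores = (k @ q) / math.sqrt(q.numel())
    return torch.softmax(scores, dim=0)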
- Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification [25.831508778029097] (arXiv, 2023-12-07)
We introduce AMUD, which quantifies the relationship between node profiles and topology from a statistical perspective.
We also propose ADPA as a new directed graph learning paradigm for AMUD.
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [86.87385758192566] (arXiv, 2023-10-14)
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
We propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges.
This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples.
- Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking [66.83273589348758] (arXiv, 2023-06-18)
Link prediction attempts to predict whether an unseen edge exists based on only a portion of a graph's edges.
A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task.
New and diverse datasets have also been created to better evaluate the effectiveness of these new models.
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244] (arXiv, 2022-11-15)
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729] (arXiv, 2022-06-06)
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707] (arXiv, 2021-02-17)
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training [29.240686573485718] (arXiv, 2021-02-14)
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique: adaptive self-adversarial (ASA) negative sampling.
- Structure Aware Negative Sampling in Knowledge Graphs [18.885368822313254] (arXiv, 2020-09-23)
A crucial aspect of contrastive learning approaches is the choice of corruption distribution that generates hard negative samples.
We propose Structure Aware Negative Sampling (SANS), an inexpensive negative sampling strategy that utilizes the rich graph structure by selecting negative samples from a node's k-hop neighborhood.
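A simplified sketch of the k-hop idea using networkx follows; the actual method relies on sparse adjacency powers and random walks for efficiency, so the BFS traversal, the `exclude` set, and uniform sampling here are assumptions for illustration.

```python
import random
import networkx as nx

def k_hop_negatives(graph, anchor, k=2, num_samples=5, exclude=()):
    # Collect every node within k hops of the anchor (BFS with a cutoff),
    # then sample negatives from this structurally close candidate pool,
    # which tends to yield harder negatives than uniform sampling.
    reachable = nx.single_source_shortest_path_length(graph, anchor, cutoff=k)
    candidates = [n for n in reachable
                  if n != anchor and n not in exclude]  # drop known positives
    return random.sample(candidates, min(num_samples, len(candidates)))
```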
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133] (arXiv, 2020-06-07)
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468] (arXiv, 2020-03-04)
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
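The push-based idea can be sketched with the classic local-push propagation used for approximate personalized PageRank, which this kind of asynchronous scheme resembles; the absolute threshold `eps` and the teleport weight `alpha` below are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def push_propagate(adj, signal, alpha=0.15, eps=1e-4):
    # Instead of synchronous rounds over all edges, push residual mass
    # only from nodes whose residual exceeds a threshold, so computation
    # concentrates on the most relevant edges until convergence.
    n = adj.shape[0]
    estimate = np.zeros(n)
    residual = signal.astype(float).copy()
    frontier = [i for i in range(n) if residual[i] > eps]
    while frontier:
        u = frontier.pop()
        r = residual[u]
        if r <= eps:
            continue  # residual may have been drained by an earlier push
        residual[u] = 0.0
        estimate[u] += alpha * r
        neighbors = np.nonzero(adj[u])[0]
        if len(neighbors) == 0:
            continue
        share = (1.0 - alpha) * r / len(neighbors)
        for v in neighbors:
            residual[v] += share
            if residual[v] > eps:
                frontier.append(v)
    return estimate
```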