Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning
- URL: http://arxiv.org/abs/2405.16224v1
- Date: Sat, 25 May 2024 13:29:31 GMT
- Title: Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning
- Authors: Zixu Wang, Bingbing Xu, Yige Yuan, Huawei Shen, Xueqi Cheng
- Abstract summary: We propose a novel strategy "Negative as Positive", where the most semantically similar cross-domain negative pairs are treated as positive during graph contrastive learning (GCL).
Our experimental results, spanning a wide array of datasets, confirm that this method substantially improves the OOD generalization performance of GCL.
- Score: 60.61079931266331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL), standing as the dominant paradigm in the realm of graph pre-training, has yielded considerable progress. Nonetheless, its capacity for out-of-distribution (OOD) generalization has been relatively underexplored. In this work, we point out that the traditional optimization of InfoNCE in GCL restricts cross-domain pairs to be negative samples only, which inevitably enlarges the distribution gap between different domains. This violates the requirement of domain invariance under the OOD scenario and consequently impairs the model's OOD generalization performance. To address this issue, we propose a novel strategy "Negative as Positive", where the most semantically similar cross-domain negative pairs are treated as positive during GCL. Our experimental results, spanning a wide array of datasets, confirm that this method substantially improves the OOD generalization performance of GCL.
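As a rough illustration of this idea, the sketch below modifies a standard InfoNCE objective so that, for each anchor, its single most similar cross-domain sample is moved from the negative set to the positive set. This is a minimal sketch, not the authors' released implementation: the tensor layout, the single-nearest-neighbor choice, and the function name `nap_infonce` are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def nap_infonce(z1, z2, domain, tau=0.5):
    """Illustrative "Negative as Positive"-style InfoNCE loss (not the paper's exact code).

    z1, z2 : (N, d) embeddings of two augmented views of the same N nodes.
    domain : (N,) integer domain label for each node.
    For each anchor, the usual positive is its counterpart in the other view;
    in addition, its most similar cross-domain node is promoted from the
    negative set to the positive set (an assumed simplification).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                              # (N, N) similarity logits
    n = sim.size(0)

    # Standard positives: the diagonal (same node, other view).
    pos_mask = torch.eye(n, dtype=torch.bool, device=sim.device)

    # Cross-domain pairs: nodes whose domain labels differ.
    cross = domain.unsqueeze(0) != domain.unsqueeze(1)   # (N, N) boolean mask

    # "Negative as Positive": promote each anchor's most similar
    # cross-domain node to the positive set.
    nn_idx = sim.masked_fill(~cross, float('-inf')).argmax(dim=1)
    has_cross = cross.any(dim=1)
    rows = torch.arange(n, device=sim.device)[has_cross]
    pos_mask[rows, nn_idx[has_cross]] = True

    # InfoNCE with (possibly) multiple positives per anchor.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()
```

In a full GCL pipeline, `z1` and `z2` would be produced by a shared GNN encoder applied to two augmentations of a multi-domain graph whose nodes carry domain labels.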
Related papers
- Graph Representation Learning via Causal Diffusion for Out-of-Distribution Recommendation [8.826417093212099]
Graph Neural Network (GNN)-based recommendation algorithms assume that training and testing data are independent and identically distributed.
This assumption often fails in the presence of out-of-distribution (OOD) data, resulting in significant performance degradation.
We propose a novel approach, graph representation learning via causal diffusion (CausalDiffRec), for OOD recommendation.
arXiv Detail & Related papers (2024-08-01T11:51:52Z)
- Smoothed Graph Contrastive Learning via Seamless Proximity Integration [35.73306919276754]
Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives.
We present a Smoothed Graph Contrastive Learning model (SGCL) that injects proximity information associated with positive/negative pairs in the contrastive loss.
The proposed SGCL adjusts the penalties associated with node pairs in the contrastive loss by incorporating three distinct smoothing techniques.
arXiv Detail & Related papers (2024-02-23T11:32:46Z)
- Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems [2.375943263571389]
In inverse problems, the incorporation of a sparsity prior yields a regularization effect on the solution.
We propose a probabilistic sparsity prior formulated as a mixture of Gaussians, capable of modeling sparsity with respect to a generic basis.
We put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network.
arXiv Detail & Related papers (2024-01-29T22:52:57Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- SelfReg: Self-supervised Contrastive Regularization for Domain Generalization [7.512471799525974]
We propose a new regularization method for domain generalization based on contrastive learning, self-supervised contrastive regularization (SelfReg).
The proposed approach uses only positive data pairs, thus resolving various problems caused by negative pair sampling.
On the recent benchmark DomainBed, the proposed method shows performance comparable to conventional state-of-the-art alternatives.
arXiv Detail & Related papers (2021-04-20T09:08:29Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose Markov chain Monte Carlo negative sampling (MCNS), which approximates the positive distribution with a self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
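To make the Metropolis-Hastings step referenced in the last entry concrete, here is a minimal sketch, not the MCNS reference implementation: it samples a negative index from a target distribution assumed to follow the self-contrast approximation (proportional to exp(anchor · node)); the uniform proposal, the number of steps, and all names are assumptions for illustration.

```python
import numpy as np

def mh_negative_sampling(anchor_emb, node_embs, num_steps=50, rng=None):
    """Illustrative Metropolis-Hastings negative sampling (MCNS-style sketch).

    The target distribution over negatives is assumed proportional to
    exp(anchor_emb · node_embs[j]), i.e. a self-contrast approximation of the
    positive distribution; the real MCNS uses more refined proposals and targets.
    """
    rng = rng or np.random.default_rng()
    n = node_embs.shape[0]

    def unnormalized_target(j):
        return np.exp(anchor_emb @ node_embs[j])

    current = rng.integers(n)           # start the chain at a random node
    for _ in range(num_steps):
        proposal = rng.integers(n)      # uniform (symmetric) proposal
        # Acceptance ratio for a symmetric proposal: p(proposal) / p(current).
        accept = min(1.0, unnormalized_target(proposal) / unnormalized_target(current))
        if rng.random() < accept:
            current = proposal
    return current                      # index of the sampled negative node
```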
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.