Towards Robust Graph Contrastive Learning
- URL: http://arxiv.org/abs/2102.13085v1
- Date: Thu, 25 Feb 2021 18:55:15 GMT
- Title: Towards Robust Graph Contrastive Learning
- Authors: Nikola Jovanović, Zhao Meng, Lukas Faber, Roger Wattenhofer
- Abstract summary: We introduce a new method that increases the adversarial robustness of the learned representations.
We evaluate the learned representations in a preliminary set of experiments, obtaining promising results.
- Score: 7.193373053157517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of adversarially robust self-supervised learning on
graphs. In the contrastive learning framework, we introduce a new method that
increases the adversarial robustness of the learned representations through i)
adversarial transformations and ii) transformations that not only remove but
also insert edges. We evaluate the learned representations in a preliminary set
of experiments, obtaining promising results. We believe this work takes an
important step towards incorporating robustness as a viable auxiliary task in
graph contrastive learning.
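The edge-removal and edge-insertion transformations mentioned in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name, the probability parameters, and the use of a dense symmetric 0/1 adjacency matrix are all assumptions made for the example.

```python
import numpy as np

def augment_adjacency(adj, p_drop=0.1, p_insert=0.01, rng=None):
    """Produce a contrastive view of a graph by randomly removing
    existing edges and inserting new ones (illustrative sketch only).

    `adj` is a symmetric 0/1 adjacency matrix with a zero diagonal.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    out = adj.copy()
    # Operate on the upper triangle so the result stays symmetric.
    iu, ju = np.triu_indices(n, k=1)
    edges = out[iu, ju].astype(bool)
    drop = edges & (rng.random(edges.size) < p_drop)       # remove some existing edges
    insert = ~edges & (rng.random(edges.size) < p_insert)  # insert some missing edges
    new = (edges & ~drop) | insert
    out[iu, ju] = new
    out[ju, iu] = new
    return out
```

In a contrastive setup, two such randomized views of the same graph would form a positive pair; the paper additionally chooses transformations adversarially, which this random sketch does not attempt.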
Related papers
- RényiCL: Contrastive Representation Learning with Skew Rényi Divergence [78.15455360335925]
We present a new robust contrastive learning scheme, coined RényiCL, which can effectively manage harder augmentations.
Our method is built upon the variational lower bound of Rényi divergence.
We show that Rényi contrastive learning objectives perform innate hard negative sampling and easy positive sampling simultaneously.
arXiv Detail & Related papers (2022-08-12T13:37:05Z)
- Understanding Contrastive Learning Requires Incorporating Inductive Biases [64.56006519908213]
Recent attempts to theoretically explain the success of contrastive learning on downstream tasks prove guarantees that depend on properties of the augmentations and on the value of the contrastive loss of the representations.
We demonstrate that such analyses ignore the inductive biases of the function class and training algorithm, even provably leading to vacuous guarantees in some settings.
arXiv Detail & Related papers (2022-02-28T18:59:20Z)
- Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework [70.84906094606072]
We present a new methodology for designing integrated contrastive losses that simultaneously achieve good accuracy and robustness on downstream tasks.
With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the adversarial accuracy.
arXiv Detail & Related papers (2021-12-08T18:54:11Z)
- Simple Contrastive Representation Adversarial Learning for NLP Tasks [17.12062566060011]
Two novel frameworks, supervised contrastive adversarial learning (SCAL) and unsupervised SCAL (USCAL), are proposed.
We apply them to Transformer-based models for natural language understanding, sentence semantic textual similarity, and adversarial learning tasks.
Experimental results on GLUE benchmark tasks show that our fine-tuned supervised method outperforms BERT$_{base}$ by over 1.75%.
arXiv Detail & Related papers (2021-11-26T03:16:09Z) - Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies for explicitly removing the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z) - A Broad Study on the Transferability of Visual Representations with
Contrastive Learning [15.667240680328922]
We study the transferability of learned representations of contrastive approaches for linear evaluation, full-network transfer, and few-shot recognition.
The results show that the contrastive approaches learn representations that are easily transferable to a different downstream task.
Our analysis reveals that the representations learned from the contrastive approaches contain more low/mid-level semantics than cross-entropy models.
arXiv Detail & Related papers (2021-03-24T22:55:04Z) - Hard Negative Mixing for Contrastive Learning [29.91220669060252]
We argue that an important aspect of contrastive learning, namely the effect of hard negatives, has so far been neglected.
We propose hard negative mixing strategies at the feature level that can be computed on the fly with minimal computational overhead.
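A feature-level hard-negative mixing step of the kind summarized above can be sketched as follows. This is a hedged illustration of the general idea, not the paper's exact method; the function name and parameters are assumptions, and all feature vectors are assumed to be L2-normalized.

```python
import numpy as np

def mix_hard_negatives(query, negatives, k=4, n_synth=2, rng=None):
    """Synthesize extra hard negatives at the feature level (sketch).

    Selects the k negatives most similar to the query, then forms
    synthetic negatives as re-normalized convex combinations of
    random pairs of those hard negatives.
    """
    rng = np.random.default_rng() if rng is None else rng
    sims = negatives @ query                 # cosine similarity for unit vectors
    hard = negatives[np.argsort(-sims)[:k]]  # k hardest (most similar) negatives
    i = rng.integers(0, k, size=n_synth)
    j = rng.integers(0, k, size=n_synth)
    alpha = rng.random((n_synth, 1))
    synth = alpha * hard[i] + (1 - alpha) * hard[j]
    synth /= np.linalg.norm(synth, axis=1, keepdims=True)  # back onto the unit sphere
    return np.vstack([negatives, synth])
```

The synthetic negatives can be appended to the negative set before computing the contrastive loss; because only the top-k similarities and a few interpolations are involved, the overhead per step is small.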
arXiv Detail & Related papers (2020-10-02T14:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.