Adversarial Learning Data Augmentation for Graph Contrastive Learning in
Recommendation
- URL: http://arxiv.org/abs/2302.02317v1
- Date: Sun, 5 Feb 2023 06:55:51 GMT
- Title: Adversarial Learning Data Augmentation for Graph Contrastive Learning in
Recommendation
- Authors: Junjie Huang, Qi Cao, Ruobing Xie, Shaoliang Zhang, Feng Xia, Huawei
Shen, Xueqi Cheng
- Abstract summary: We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL).
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively.
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
- Score: 56.10351068286499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Graph Neural Networks (GNNs) have achieved remarkable success in
Recommendation. To reduce the influence of data sparsity, Graph Contrastive
Learning (GCL) is adopted in GNN-based collaborative filtering (CF) methods to enhance performance.
Most GCL methods consist of data augmentation and a contrastive loss (e.g.,
InfoNCE). They construct the contrastive pairs by hand-crafted graph
augmentations and maximize the agreement between different views of the same
node compared to that of other nodes, which is known as the InfoMax principle.
However, improper data augmentation can hinder the performance of GCL. The
InfoMin principle states that a good set of views shares minimal information,
and it gives guidelines for designing better data augmentation. In this paper, we first propose
a new data augmentation (i.e., edge-operating including edge-adding and
edge-dropping). Then, guided by the InfoMin principle, we propose a novel
theoretically guided contrastive learning framework, named Learnable Data
Augmentation for Graph Contrastive Learning (LDA-GCL). Our methods include data
augmentation learning and graph contrastive learning, which follow the InfoMin
and InfoMax principles, respectively. In implementation, our methods optimize
the adversarial loss function to learn data augmentation and effective
representations of users and items. Extensive experiments on four public
benchmark datasets demonstrate the effectiveness of LDA-GCL.
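The two building blocks named in the abstract, edge-dropping augmentation and the InfoNCE contrastive loss, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`drop_edges`, `info_nce`), the temperature value, and the use of cosine similarity are assumptions for the sketch, and the learnable, adversarially trained augmenter of LDA-GCL is replaced here by simple random edge-dropping.

```python
import numpy as np

def drop_edges(edges, drop_rate, rng):
    """Edge-dropping augmentation: keep each edge independently
    with probability 1 - drop_rate (hypothetical helper)."""
    mask = rng.random(len(edges)) >= drop_rate
    return [e for e, keep in zip(edges, mask) if keep]

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE loss between two views of the same set of nodes.

    Row i of z1 and z2 are the embeddings of node i under each view.
    (z1[i], z2[i]) is the positive pair; (z1[i], z2[j]) for j != i
    serve as negatives, following the InfoMax agreement objective.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Softmax cross-entropy with the diagonal as the positive class
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()
```

In the adversarial setup the abstract describes, an augmenter would be trained to *increase* this loss (InfoMin: views share less information) while the encoder is trained to *decrease* it (InfoMax); the sketch above shows only the loss itself.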
Related papers
- LightGCL: Simple Yet Effective Graph Contrastive Learning for
Recommendation [9.181689366185038]
Graph neural networks (GNNs) are a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training.
We experimentally validate AD-GCL by comparing with state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.