Dual Adversarial Perturbators Generate Rich Views for Recommendation
- URL: http://arxiv.org/abs/2409.06719v1
- Date: Mon, 26 Aug 2024 15:19:35 GMT
- Title: Dual Adversarial Perturbators Generate Rich Views for Recommendation
- Authors: Lijun Zhang, Yuan Yao, Haibo Ye
- Abstract summary: AvoGCL emulates curriculum learning by applying adversarial training to graph structures and embedding perturbations.
Experiments on three real-world datasets demonstrate that AvoGCL significantly outperforms state-of-the-art competitors.
- Score: 16.284670207195056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) has been extensively studied and leveraged as a potent tool in recommender systems. Most existing GCL-based recommenders generate contrastive views by altering the graph structure or introducing perturbations to embeddings. While these methods effectively enhance learning from sparse data, they risk performance degradation or even training collapse when the differences between contrastive views become too pronounced. To mitigate this issue, we employ curriculum learning to incrementally increase the disparity between contrastive views, enabling the model to benefit from progressively more challenging scenarios. In this paper, we propose a dual-adversarial graph learning approach, AvoGCL, which emulates curriculum learning by progressively applying adversarial training to graph structures and embedding perturbations. Specifically, AvoGCL constructs contrastive views by reducing graph redundancy and generating adversarial perturbations in the embedding space, and achieves better results by gradually increasing the difficulty of the contrastive views. Extensive experiments on three real-world datasets demonstrate that AvoGCL significantly outperforms state-of-the-art competitors.
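To make the mechanism concrete, below is a minimal, hypothetical sketch of the two ideas the abstract describes: an adversarial perturbation in the embedding space that yields a harder second view, and a curriculum schedule that grows the perturbation budget over training. The names (curriculum_eps, adversarial_view, info_nce), the FGSM-style sign perturbation, and all hyperparameters are illustrative assumptions, not AvoGCL's actual implementation; the paper's graph-redundancy-reduction component is omitted.

```python
# Hypothetical sketch: curriculum-scaled adversarial embedding views for GCL.
import torch
import torch.nn.functional as F

def curriculum_eps(step: int, total_steps: int,
                   eps_min: float = 0.01, eps_max: float = 0.1) -> float:
    """Linearly grow the perturbation budget so views get harder over training."""
    t = min(step / total_steps, 1.0)
    return eps_min + t * (eps_max - eps_min)

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Standard InfoNCE between two views; diagonal pairs are the positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def adversarial_view(emb: torch.Tensor, loss: torch.Tensor, eps: float) -> torch.Tensor:
    """FGSM-style view: push embeddings along the sign of the loss gradient."""
    grad = torch.autograd.grad(loss, emb, retain_graph=True)[0]
    return emb + eps * grad.sign()

# Usage with dummy user/item embeddings.
emb = torch.randn(256, 64, requires_grad=True)
warmup = info_nce(emb + 0.01 * torch.randn_like(emb), emb)  # easy random view
eps = curriculum_eps(step=500, total_steps=1000)            # mid-training budget
hard_view = adversarial_view(emb, warmup, eps)              # harder adversarial view
loss = info_nce(hard_view, emb)
loss.backward()
```

A linear schedule is the simplest choice here; any monotone schedule would fit the abstract's "gradually increasing the difficulty" description.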
Related papers
- TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering [20.26347686022996]
We introduce a twin encoder in place of random augmentations, demonstrating the redundancy of traditional augmentation techniques (a generic sketch of the twin-encoder idea appears after this list).
Our proposed Twin Graph Contrastive Learning model -- TwinCL -- aligns positive pairs of user and item embeddings and the representations from the twin encoder.
Our theoretical analysis and experimental results show that the proposed model improves both recommendation accuracy and training efficiency.
arXiv Detail & Related papers (2024-09-27T22:31:08Z)
- Multi-Task Curriculum Graph Contrastive Learning with Clustering Entropy Guidance [25.5510013711661]
We propose the Clustering-guided Curriculum Graph Contrastive Learning (CCGL) framework.
CCGL uses clustering entropy to guide the subsequent graph augmentation and contrastive learning (a minimal entropy sketch appears after this list).
Experimental results demonstrate that CCGL achieves excellent performance compared to state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-22T02:18:47Z)
- Towards Robust Recommendation via Decision Boundary-aware Graph Contrastive Learning [25.514007761856632]
Graph contrastive learning (GCL) has received increasing attention in recommender systems due to its effectiveness in reducing bias caused by data sparsity.
We argue that these methods struggle to balance semantic invariance and view hardness across the dynamic training process.
We propose a novel GCL-based recommendation framework RGCL, which effectively maintains the semantic invariance of contrastive pairs and dynamically adapts as the model capability evolves.
arXiv Detail & Related papers (2024-07-14T13:03:35Z)
- Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation [35.875976206333185]
ACGCL capitalizes on the merits of pair-wise augmentation to engender graph-level positive and negative samples with controllable similarity.
Within the ACGCL framework, we have devised a novel adversarial curriculum training methodology.
A comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets.
arXiv Detail & Related papers (2024-02-16T06:17:50Z)
- Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representations at the whole-graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability that a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- Adversarial Cross-View Disentangled Graph Contrastive Learning [30.97720522293301]
We introduce ACDGCL, which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data.
We empirically demonstrate that our proposed model outperforms the state of the art on graph classification across multiple benchmark datasets.
arXiv Detail & Related papers (2022-09-16T03:48:39Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to avoid capturing redundant information across the augmented views.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach lifts instance-level contrastive learning and missing-data inference to the cluster level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Dual Space Graph Contrastive Learning [82.81372024482202]
We propose a novel graph contrastive learning method, namely Dual Space Graph Contrastive (DSGC) Learning.
Since both spaces have their own advantages for representing graph data, we use graph contrastive learning to bridge the two spaces and leverage the advantages of both.
arXiv Detail & Related papers (2022-01-19T04:10:29Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
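As referenced in the TwinCL entry above, here is a generic sketch of a twin encoder supplying a second contrastive view without graph augmentation. TwinCL's abstract does not specify the twin's update rule, so this sketch assumes a momentum (EMA) twin in the MoCo/BYOL style; the class name TwinEncoder, the momentum value, and the update rule are all assumptions, not TwinCL's actual design.

```python
# Hypothetical twin encoder: the frozen twin produces the second view and is
# updated as an exponential moving average (EMA) of the trained online encoder.
import copy
import torch

class TwinEncoder:
    def __init__(self, encoder: torch.nn.Module, momentum: float = 0.99):
        self.online = encoder
        self.twin = copy.deepcopy(encoder)        # frozen twin, updated by EMA only
        for p in self.twin.parameters():
            p.requires_grad_(False)
        self.m = momentum

    @torch.no_grad()
    def update_twin(self) -> None:
        """EMA update: twin <- m * twin + (1 - m) * online."""
        for p_t, p_o in zip(self.twin.parameters(), self.online.parameters()):
            p_t.mul_(self.m).add_(p_o, alpha=1.0 - self.m)

    def views(self, x: torch.Tensor):
        """Two views of the same input: one online pass, one frozen twin pass."""
        with torch.no_grad():
            v2 = self.twin(x)
        return self.online(x), v2

# Usage with a toy encoder.
enc = torch.nn.Sequential(torch.nn.Linear(64, 32))
twin = TwinEncoder(enc)
v1, v2 = twin.views(torch.randn(8, 64))  # positive pair for a contrastive loss
twin.update_twin()                        # call once per optimizer step
```

The two views can then be paired as positives in an InfoNCE-style loss such as the one sketched after the main abstract.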
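For the CCGL entry above: the abstract says clustering entropy guides graph augmentation and contrastive learning, which suggests scoring each node by the entropy of its soft cluster assignment and scheduling confidently clustered (low-entropy) nodes first. The sketch below is a hypothetical reading of that idea; the helper clustering_entropy and the easy-to-hard ordering are assumptions, not CCGL's actual procedure.

```python
# Hypothetical clustering-entropy guidance: low entropy = confidently clustered.
import torch

def clustering_entropy(assign_probs: torch.Tensor) -> torch.Tensor:
    """Per-node entropy (in nats) of soft cluster assignments, shape (N, K)."""
    p = assign_probs.clamp_min(1e-12)   # guard against log(0)
    return -(p * p.log()).sum(dim=-1)

# Usage: order nodes from most to least confidently clustered (easy -> hard).
probs = torch.softmax(torch.randn(1000, 10), dim=-1)  # dummy N x K assignments
order = clustering_entropy(probs).argsort()            # ascending entropy
```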
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.