Graph Masked Autoencoder for Sequential Recommendation
- URL: http://arxiv.org/abs/2305.04619v3
- Date: Thu, 1 Jun 2023 06:32:46 GMT
- Title: Graph Masked Autoencoder for Sequential Recommendation
- Authors: Yaowen Ye, Lianghao Xia, Chao Huang
- Abstract summary: We propose a Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation.
Our method significantly outperforms state-of-the-art baseline models and can learn more accurate representations against data noise and sparsity.
- Score: 10.319298705782058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While some powerful neural network architectures (e.g., Transformer, Graph
Neural Networks) have achieved improved performance in sequential
recommendation with high-order item dependency modeling, they may suffer from
poor representation capability in label scarcity scenarios. To address the
issue of insufficient labels, Contrastive Learning (CL) has attracted much
attention in recent methods to perform data augmentation through embedding
contrasting for self-supervision. However, due to the hand-crafted property of
their contrastive view generation strategies, existing CL-enhanced models i)
can hardly yield consistent performance on diverse sequential recommendation
tasks; ii) may not be immune to user behavior data noise. In light of this, we
propose a simple yet effective Graph Masked AutoEncoder-enhanced sequential
Recommender system (MAERec) that adaptively and dynamically distills global
item transitional information for self-supervised augmentation. It naturally
avoids the above issue of heavy reliance on constructing high-quality embedding
contrastive views. Instead, an adaptive data reconstruction paradigm is
designed to be integrated with the long-range item dependency modeling, for
informative augmentation in sequential recommendation. Extensive experiments
demonstrate that our method significantly outperforms state-of-the-art baseline
models and can learn more accurate representations against data noise and
sparsity. Our implemented model code is available at
https://github.com/HKUDS/MAERec.
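To make the general technique concrete, the following is a minimal, hypothetical NumPy sketch of graph masked autoencoding over an item-transition graph — the self-supervised paradigm MAERec builds on. This is not the authors' implementation: the toy sequences, embedding dimension, single-layer GCN-style encoder, and inner-product decoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user behavior sequences (hypothetical data); consecutive items form
# transition edges in a shared item-transition graph.
sequences = [[0, 1, 2, 3], [1, 2, 4], [0, 2, 4, 3]]
n_items = 5
A = np.zeros((n_items, n_items))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        A[a, b] = A[b, a] = 1.0  # undirected transition edge

# Randomly mask a fraction of edges; reconstructing them is the
# self-supervised task (here a fixed 25% mask ratio, an assumption).
edges = np.argwhere(np.triu(A) > 0)
mask_idx = rng.choice(len(edges), size=max(1, len(edges) // 4), replace=False)
A_masked = A.copy()
for i in mask_idx:
    a, b = edges[i]
    A_masked[a, b] = A_masked[b, a] = 0.0

# Encoder: one symmetric-normalized GCN-style propagation step over the
# masked graph, starting from random item embeddings.
d = 8
E = rng.normal(size=(n_items, d))
deg = A_masked.sum(1) + 1.0  # +1 accounts for the self-loop
A_norm = (A_masked + np.eye(n_items)) / np.sqrt(np.outer(deg, deg))
H = A_norm @ E  # propagated item representations

# Decoder: score a candidate edge as the sigmoid of the inner product of
# the two endpoint representations.
def edge_score(h, a, b):
    return 1.0 / (1.0 + np.exp(-h[a] @ h[b]))

# Reconstruction loss over the masked edges (negative log-likelihood);
# in training this gradient signal would update E and the encoder.
loss = -np.mean([np.log(edge_score(H, *edges[i]) + 1e-9) for i in mask_idx])
print(float(loss))
```

In a full model, the mask selection would be adaptive (as MAERec's abstract describes) rather than uniform-random, and the encoder would be a trained multi-layer GNN; the sketch only shows the mask-then-reconstruct loop itself.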
Related papers
- ScribbleGen: Generative Data Augmentation Improves Scribble-supervised Semantic Segmentation [10.225021032417589]
We propose ScribbleGen, a generative data augmentation method for scribble-supervised semantic segmentation.
We leverage a ControlNet diffusion model conditioned on semantic scribbles to produce high-quality training data.
We show that our framework significantly improves segmentation performance on small datasets, even surpassing fully-supervised segmentation.
arXiv Detail & Related papers (2023-11-28T13:44:33Z) - CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT)
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z) - LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
The graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z) - Self-Supervised Hypergraph Transformer for Recommender Systems [25.07482350586435]
We propose the Self-Supervised Hypergraph Transformer (SHT).
A cross-view generative self-supervised learning component is proposed for data augmentation over the user-item interaction graph.
arXiv Detail & Related papers (2022-07-28T18:40:30Z) - Improving Contrastive Learning with Model Augmentation [123.05700988581806]
Sequential recommendation aims to predict the next items in user behavior sequences, which can be solved by characterizing item relationships within those sequences.
Due to the data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm is proposed to improve the performance.
arXiv Detail & Related papers (2022-03-25T06:12:58Z) - Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the weak supervision derived label estimate.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec)
arXiv Detail & Related papers (2021-08-14T07:15:25Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.