Improving Sequential Recommendation Consistency with Self-Supervised
Imitation
- URL: http://arxiv.org/abs/2106.14031v2
- Date: Tue, 29 Jun 2021 11:09:33 GMT
- Title: Improving Sequential Recommendation Consistency with Self-Supervised
Imitation
- Authors: Xu Yuan, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Zhuoye Ding, Zhen
He, Bo Long
- Abstract summary: We propose a model, SSI, to improve sequential recommendation consistency with Self-Supervised Imitation.
To take advantage of all three independent aspects of consistency-enhanced knowledge, we establish an integrated imitation learning framework.
Experiments on four real-world datasets show that SSI effectively outperforms the state-of-the-art sequential recommendation methods.
- Score: 31.156591972077162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most sequential recommendation models capture the features of consecutive
items in a user-item interaction history. Though effective, their
representation expressiveness is still hindered by the sparse learning signals.
As a result, the sequential recommender is prone to make inconsistent
predictions. In this paper, we propose a model, SSI, to improve sequential
recommendation consistency with Self-Supervised Imitation. Precisely, we
extract the consistency knowledge by utilizing three self-supervised
pre-training tasks, where temporal consistency and persona consistency capture
user-interaction dynamics in terms of the chronological order and persona
sensitivities, respectively. Furthermore, to provide the model with a global
perspective, global session consistency is introduced by maximizing the mutual
information among global and local interaction sequences. Finally, to
comprehensively take advantage of all three independent aspects of
consistency-enhanced knowledge, we establish an integrated imitation learning
framework. The consistency knowledge is effectively internalized and
transferred to the student model by imitating the conventional prediction logit
as well as the consistency-enhanced item representations. In addition, the
flexible self-supervised imitation framework can also benefit other student
recommenders. Experiments on four real-world datasets show that SSI effectively
outperforms the state-of-the-art sequential recommendation methods.
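The imitation step in the abstract distills both the teacher's prediction logits and its consistency-enhanced item representations into the student. A minimal numpy sketch of such a combined objective, as a hedged illustration only: the temperature, the weighting `alpha`, and the specific KL-plus-MSE combination are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def imitation_loss(student_logits, teacher_logits, student_repr, teacher_repr,
                   temperature=2.0, alpha=0.5):
    """Hypothetical SSI-style imitation objective: the student matches the
    teacher's softened prediction distribution (KL divergence) and its
    consistency-enhanced item representations (mean squared error)."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    mse = np.mean((student_repr - teacher_repr) ** 2)
    return alpha * kl + (1.0 - alpha) * mse
```

When the student exactly reproduces the teacher's logits and representations, the loss is zero; any mismatch in either term increases it, so gradients flow through both the prediction and the representation channels.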
Related papers
- Give Users the Wheel: Towards Promptable Recommendation Paradigm [21.39017335979666]
Decoupled Promptable Sequential Recommendation (DPR) is a model-agnostic framework that empowers conventional sequential backbones to support Promptable Recommendation.
DPR modulates the latent user representation directly within the retrieval space.
It significantly outperforms state-of-the-art baselines in prompt-guided tasks.
arXiv Detail & Related papers (2026-02-21T18:41:28Z) - Generative Reasoning Recommendation via LLMs [48.45009951684554]
Large language models (LLMs) face fundamental challenges in functioning as generative reasoning recommendation models (GRRMs).
This work explores how to build GRRMs by adapting pre-trained LLMs, unifying understanding, reasoning, and prediction for recommendation tasks.
We propose GREAM, an end-to-end framework that integrates three components: Collaborative-Semantic Alignment, Reasoning Curriculum Activation, and Sparse-Regularized Group Policy Optimization.
arXiv Detail & Related papers (2025-10-23T17:59:31Z) - Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model.
It uses a dynamic programming algorithm to optimize cognitive representations based on question difficulty and the performance intervals between questions.
This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z) - Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback [59.768119380109084]
This paper introduces an interactive continual learning paradigm where AI models dynamically learn new skills from real-time human feedback.
We propose RiCL, a Reinforced interactive Continual Learning framework leveraging Large Language Models (LLMs).
Our RiCL approach substantially outperforms existing combinations of state-of-the-art online continual learning and noisy-label learning methods.
arXiv Detail & Related papers (2025-05-15T03:22:03Z) - Knowledge Retention for Continual Model-Based Reinforcement Learning [11.5581880507344]
DRAGO is a novel approach for continual model-based reinforcement learning.
DRAGO comprises two key components: Synthetic Experience Rehearsal and Regaining Memories Through Exploration.
Empirical evaluations demonstrate that DRAGO is able to preserve knowledge across tasks, achieving superior performance in various continual learning scenarios.
arXiv Detail & Related papers (2025-03-06T09:38:14Z) - Uniting contrastive and generative learning for event sequences models [51.547576949425604]
This study investigates the integration of two self-supervised learning techniques - instance-wise contrastive learning and a generative approach based on restoring masked events in latent space.
Experiments conducted on several public datasets, focusing on sequence classification and next-event type prediction, show that the integrated method achieves superior performance compared to individual approaches.
arXiv Detail & Related papers (2024-08-19T13:47:17Z) - Behavior-Dependent Linear Recurrent Units for Efficient Sequential Recommendation [18.75561256311228]
RecBLR is an Efficient Sequential Recommendation Model based on Behavior-Dependent Linear Recurrent Units.
Our model significantly enhances user behavior modeling and recommendation performance.
arXiv Detail & Related papers (2024-06-18T13:06:58Z) - Rank-N-Contrast: Learning Continuous Representations for Regression [28.926518084216607]
Rank-N-Contrast (RNC) is a framework that learns continuous representations for regression by contrasting samples against each other based on their rankings in the target space.
RNC guarantees the desired order of learned representations in accordance with the target orders.
RNC achieves state-of-the-art performance, highlighting its intriguing properties including better data efficiency, robustness to spurious targets and data corruptions.
arXiv Detail & Related papers (2022-10-03T19:00:38Z) - Enhancing Sequential Recommendation with Graph Contrastive Learning [64.05023449355036]
This paper proposes a novel sequential recommendation framework, namely Graph Contrastive Learning for Sequential Recommendation (GCL4SR).
GCL4SR employs a Weighted Item Transition Graph (WITG), built based on interaction sequences of all users, to provide global context information for each interaction and weaken the noise information in the sequence data.
Experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods.
arXiv Detail & Related papers (2022-05-30T03:53:31Z) - Learning Self-Modulating Attention in Continuous Time Space with
Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z) - Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents into SR models via contrastive SSL, which maximizes the agreement between a view of sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
arXiv Detail & Related papers (2022-02-05T09:24:13Z) - Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training [46.5064172656298]
We introduce Bidirectional temporal data Augmentation with pre-training (BARec).
Our approach leverages bidirectional temporal augmentation and knowledge-enhanced fine-tuning to synthesize authentic pseudo-prior items.
Our comprehensive experimental analysis on five benchmark datasets confirms the superiority of BARec on both short and long sequences.
arXiv Detail & Related papers (2021-12-13T07:33:28Z) - Learning Dual Dynamic Representations on Time-Sliced User-Item
Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z) - S^3-Rec: Self-Supervised Learning for Sequential Recommendation with
Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
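Several entries above, including SSI's global session consistency and S^3-Rec, maximize mutual information between two views of a sequence. In practice this is commonly approximated with an InfoNCE-style contrastive bound; the following numpy sketch assumes paired rows with in-batch negatives (the function name, shapes, and temperature are illustrative, not taken from any of the papers).

```python
import numpy as np

def info_nce(local_repr, global_repr, temperature=0.1):
    """InfoNCE lower bound on the mutual information between paired local
    and global sequence representations. Row i of each matrix is assumed
    to come from the same session; all other rows serve as negatives."""
    l = local_repr / np.linalg.norm(local_repr, axis=1, keepdims=True)
    g = global_repr / np.linalg.norm(global_repr, axis=1, keepdims=True)
    sim = l @ g.T / temperature          # cosine similarities, scaled
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    logits = np.exp(sim)
    pos = np.diag(logits)                # positive pairs on the diagonal
    return -np.mean(np.log(pos / logits.sum(axis=1)))
```

Minimizing this loss pulls matched local/global pairs together while pushing unmatched pairs apart, which is the standard way such mutual-information objectives are operationalized.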
This list is automatically generated from the titles and abstracts of the papers in this site.