Modeling Domain and Feedback Transitions for Cross-Domain Sequential Recommendation
- URL: http://arxiv.org/abs/2408.08209v1
- Date: Thu, 15 Aug 2024 15:18:55 GMT
- Title: Modeling Domain and Feedback Transitions for Cross-Domain Sequential Recommendation
- Authors: Changshuo Zhang, Teng Shi, Xiao Zhang, Qi Liu, Ruobing Xie, Jun Xu, Ji-Rong Wen
- Abstract summary: $\text{Transition}^2$ is a novel method to model transitions across both domains and types of user feedback.
We introduce a transition-aware graph encoder based on user history, assigning different weights to edges according to the feedback type.
We encode the user history using a cross-transition multi-head self-attention, incorporating various masks to distinguish different types of transitions.
- Score: 60.09293734134179
- License:
- Abstract: Nowadays, many recommender systems encompass various domains to cater to users' diverse needs, leading to user behaviors transitioning across different domains. In fact, user behaviors across different domains reveal changes in preference toward recommended items. For instance, a shift from negative feedback to positive feedback indicates improved user satisfaction. However, existing cross-domain sequential recommendation methods typically model user interests by focusing solely on information about domain transitions, often overlooking the valuable insights provided by users' feedback transitions. In this paper, we propose $\text{Transition}^2$, a novel method to model transitions across both domains and types of user feedback. Specifically, $\text{Transition}^2$ introduces a transition-aware graph encoder based on user history, assigning different weights to edges according to the feedback type. This enables the graph encoder to extract historical embeddings that capture the transition information between different domains and feedback types. Subsequently, we encode the user history using a cross-transition multi-head self-attention, incorporating various masks to distinguish different types of transitions. Finally, we integrate these modules to make predictions across different domains. Experimental results on two public datasets demonstrate the effectiveness of $\text{Transition}^2$.
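To make the cross-transition attention idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes each history item carries a domain id and a feedback type, builds boolean masks for the four domain/feedback transition combinations, runs one multi-head attention per mask, and concatenates the results. All module and variable names (CrossTransitionSelfAttention, same_dom_same_fb, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossTransitionSelfAttention(nn.Module):
    """Hypothetical single-layer sketch of cross-transition multi-head
    self-attention with masks that separate same-domain / cross-domain and
    same-feedback / cross-feedback transitions."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        # One attention module per transition type (an assumed design choice).
        self.attn = nn.ModuleDict({
            name: nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for name in ["same_dom_same_fb", "same_dom_cross_fb",
                         "cross_dom_same_fb", "cross_dom_cross_fb"]
        })
        self.out = nn.Linear(4 * d_model, d_model)

    def forward(self, h, domains, feedback):
        # h:        (B, L, d_model) embeddings of the user's history items
        # domains:  (B, L) integer domain ids
        # feedback: (B, L) integer feedback types (e.g. 0 = negative, 1 = positive)
        same_dom = domains.unsqueeze(2) == domains.unsqueeze(1)   # (B, L, L)
        same_fb = feedback.unsqueeze(2) == feedback.unsqueeze(1)  # (B, L, L)
        masks = {
            "same_dom_same_fb":   same_dom & same_fb,
            "same_dom_cross_fb":  same_dom & ~same_fb,
            "cross_dom_same_fb":  ~same_dom & same_fb,
            "cross_dom_cross_fb": ~same_dom & ~same_fb,
        }
        eye = torch.eye(h.size(1), dtype=torch.bool, device=h.device)
        outputs = []
        for name, attn in self.attn.items():
            # nn.MultiheadAttention blocks positions where attn_mask is True,
            # and expects one (L, L) mask per head, stacked along the batch dim.
            block = ~masks[name]
            block = block.repeat_interleave(attn.num_heads, dim=0)  # (B*heads, L, L)
            # Always allow self-attention on the diagonal so no query row is
            # fully masked (a simplification to avoid NaN softmax rows).
            block = block & ~eye
            o, _ = attn(h, h, h, attn_mask=block)
            outputs.append(o)
        return self.out(torch.cat(outputs, dim=-1))
```

In this sketch the transition-aware graph encoder is omitted; its role, per the abstract, would be to weight edges between consecutive interactions by feedback type and supply the historical embeddings `h` consumed here.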
Related papers
- Cross-domain Transfer of Valence Preferences via a Meta-optimization Approach [17.545983294377958]
CVPM formalizes cross-domain interest transfer as a hybrid architecture of meta-learning and self-supervised learning.
With deep insights into user preferences, we employ differentiated encoders to learn their distributions.
In particular, we treat each user's mapping as two parts, the common transformation and the personalized bias, where the network used to generate the personalized bias is output by a meta-learner.
arXiv Detail & Related papers (2024-06-24T10:02:24Z) - Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
arXiv Detail & Related papers (2023-11-14T16:07:16Z) - Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z) - Cross-domain recommendation via user interest alignment [20.387327479445773]
Cross-domain recommendation aims to leverage knowledge from multiple domains to alleviate the data sparsity and cold-start problems in traditional recommender systems.
The general practice of this approach is to train user embeddings in each domain separately and then aggregate them in a plain manner.
We propose a novel cross-domain recommendation framework, namely COAST, to improve recommendation performance on dual domains.
arXiv Detail & Related papers (2023-01-26T23:54:41Z) - Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which produces good generalization ability.
arXiv Detail & Related papers (2022-04-01T10:10:50Z) - Cross-domain User Preference Learning for Cold-start Recommendation [32.83868293457142]
Cross-domain cold-start recommendation is an increasingly important issue for recommender systems.
It is critical to learn a user's preference from the source domain and transfer it into the target domain.
We propose a self-trained Cross-dOmain User Preference LEarning framework, targeting cold-start recommendation with various semantic tags.
arXiv Detail & Related papers (2021-12-07T12:57:05Z) - Dual Metric Learning for Effective and Efficient Cross-Domain Recommendations [85.6250759280292]
Cross-domain recommender systems have been increasingly valuable for helping consumers identify useful items in different applications.
Existing cross-domain models typically require a large number of overlapping users, which can be difficult to obtain in some applications.
We propose a novel cross-domain recommendation model based on dual learning that transfers information between two related domains in an iterative manner.
arXiv Detail & Related papers (2021-04-17T09:18:59Z) - CATN: Cross-Domain Recommendation for Cold-Start Users via Aspect Transfer Network [49.35977893592626]
We propose a cross-domain recommendation framework via an aspect transfer network for cold-start users (named CATN).
CATN is devised to extract multiple aspects for each user and each item from their review documents, and learn aspect correlations across domains with an attention mechanism.
On real-world datasets, the proposed CATN outperforms SOTA models significantly in terms of rating prediction accuracy.
arXiv Detail & Related papers (2020-05-21T10:05:19Z) - Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation [19.106717948585445]
We develop scalable neural layer-transfer approaches for cross-domain learning.
Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains.
We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa.
arXiv Detail & Related papers (2020-05-21T05:51:15Z)