Image Fusion for Cross-Domain Sequential Recommendation
- URL: http://arxiv.org/abs/2502.15694v2
- Date: Thu, 27 Feb 2025 03:08:16 GMT
- Title: Image Fusion for Cross-Domain Sequential Recommendation
- Authors: Wangyu Wu, Siqi Song, Xianglin Qiu, Xiaowei Huang, Fei Ma, Jimin Xiao
- Abstract summary: Cross-Domain Sequential Recommendation aims to predict future user interactions based on historical interactions across multiple domains. The key challenge in CDSR is effectively capturing cross-domain user preferences by fully leveraging both intra-sequence and inter-sequence item interactions. We propose a novel method, Image Fusion for Cross-Domain Sequential Recommendation (IFCDSR), which incorporates item image information to better capture visual preferences.
- Score: 20.37668418178215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-Domain Sequential Recommendation (CDSR) aims to predict future user interactions based on historical interactions across multiple domains. The key challenge in CDSR is effectively capturing cross-domain user preferences by fully leveraging both intra-sequence and inter-sequence item interactions. In this paper, we propose a novel method, Image Fusion for Cross-Domain Sequential Recommendation (IFCDSR), which incorporates item image information to better capture visual preferences. Our approach integrates a frozen CLIP model to generate image embeddings, enriching original item embeddings with visual data from both intra-sequence and inter-sequence interactions. Additionally, we employ a multiple attention layer to capture cross-domain interests, enabling joint learning of single-domain and cross-domain user preferences. To validate the effectiveness of IFCDSR, we re-partitioned four e-commerce datasets and conducted extensive experiments. Results demonstrate that IFCDSR significantly outperforms existing methods.
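To make the described pipeline concrete, below is a minimal sketch of the fusion idea, assuming PyTorch: image features from a frozen CLIP encoder are projected into the item-embedding space, added to learned ID embeddings, and attended over the interaction sequence. All module names, dimensions, and the additive fusion are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: fusing frozen CLIP image embeddings with learned
# item ID embeddings, then applying multi-head attention over a user's
# interaction sequence. Names and dimensions are illustrative only.
import torch
import torch.nn as nn

class FusedItemEncoder(nn.Module):
    def __init__(self, num_items, id_dim=128, clip_dim=512):
        super().__init__()
        self.id_embed = nn.Embedding(num_items, id_dim)
        # Project frozen CLIP image features into the item-embedding space.
        self.img_proj = nn.Linear(clip_dim, id_dim)
        # Multi-head attention over the fused sequence captures intra-sequence
        # (and, if domain sequences are concatenated, inter-sequence) interactions.
        self.attn = nn.MultiheadAttention(id_dim, num_heads=4, batch_first=True)

    def forward(self, item_ids, clip_feats):
        # item_ids: (batch, seq_len); clip_feats: (batch, seq_len, clip_dim),
        # precomputed with a frozen CLIP image encoder (e.g. via
        # transformers.CLIPModel.get_image_features, gradients disabled).
        fused = self.id_embed(item_ids) + self.img_proj(clip_feats)
        out, _ = self.attn(fused, fused, fused)
        return out  # (batch, seq_len, id_dim) sequence representation

# Toy usage with random stand-ins for real item IDs and CLIP features.
enc = FusedItemEncoder(num_items=10_000)
ids = torch.randint(0, 10_000, (2, 20))
feats = torch.randn(2, 20, 512)
seq_repr = enc(ids, feats)  # torch.Size([2, 20, 128])
```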
Related papers
- Hierarchical Attention Fusion of Visual and Textual Representations for Cross-Domain Sequential Recommendation [19.654959889052638]
Cross-Domain Sequential Recommendation (CDSR) predicts user behavior by leveraging historical interactions across multiple domains.
We propose Hierarchical Attention Fusion of Visual and Textual Representations (HAF-VT), a novel approach integrating visual and textual data to enhance cognitive modeling.
A hierarchical attention mechanism jointly learns single-domain and cross-domain preferences, mimicking human information integration.
arXiv Detail & Related papers (2025-04-21T13:18:54Z)
- A Zero-Shot Generalization Framework for LLM-Driven Cross-Domain Sequential Recommendation [5.512301280728178]
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions in unseen domains without the need for additional training or fine-tuning.
Recent advancements in large language models (LLMs) have greatly improved ZCDSR by leveraging rich pretrained representations to facilitate cross-domain knowledge transfer.
We propose a novel framework designed to enhance LLM-based ZCDSR by improving cross-domain alignment at both the item and sequential levels.
arXiv Detail & Related papers (2025-01-31T15:43:21Z)
- Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z)
- Information Maximization via Variational Autoencoders for Cross-Domain Recommendation [26.099908029810756]
We introduce a new CDSR framework named Information Maximization Variational Autoencoder (IM-VAE).
Here, we suggest using a Pseudo-Sequence Generator to enhance the user's interaction history input for downstream fine-grained CDSR models.
To the best of our knowledge, this paper is the first CDSR work that considers the information disentanglement and denoising of pseudo-sequences in the open-world recommendation scenario.
arXiv Detail & Related papers (2024-05-31T09:07:03Z)
- Learning Partially Aligned Item Representation for Cross-Domain Sequential Recommendation [72.73379646418435]
Cross-domain sequential recommendation aims to uncover and transfer users' sequential preferences across domains.
Misaligned item representations can lead to sub-optimal sequential modeling and user representation alignment.
We propose a model-agnostic framework called Cross-domain item representation Alignment for Cross-Domain Sequential Recommendation (CA-CDSR).
arXiv Detail & Related papers (2024-05-21T03:25:32Z)
- A Multimodal Approach for Cross-Domain Image Retrieval [5.5547914920738]
Cross-Domain Image Retrieval (CDIR) is a challenging task in computer vision.
This paper introduces a novel unsupervised approach to CDIR that incorporates textual context by leveraging pre-trained vision-language models.
Our method, dubbed Caption-Matching (CM), uses generated image captions as a domain-agnostic intermediate representation (a toy sketch of this matching step appears after this list).
arXiv Detail & Related papers (2024-03-22T12:08:16Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
arXiv Detail & Related papers (2023-11-14T16:07:16Z)
- FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning [17.497009723665116]
We propose FedDCSR, a novel cross-domain sequential recommendation framework via disentangled representation learning.
We introduce an approach called inter-intra domain sequence representation disentanglement (SRD) to disentangle user sequence features into domain-shared and domain-exclusive features (a toy sketch of this idea also appears after this list).
In addition, we design an intra domain contrastive infomax (CIM) strategy to learn richer domain-exclusive features of users by performing data augmentation on user sequences.
arXiv Detail & Related papers (2023-09-15T14:23:20Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- A cross-domain recommender system using deep coupled autoencoders [77.86290991564829]
Two novel coupled autoencoder-based deep learning methods are proposed for cross-domain recommendation (a toy sketch of the coupling idea appears after this list).
The first method aims to simultaneously learn a pair of autoencoders in order to reveal the intrinsic representations of the items in the source and target domains.
The second method is derived based on a new joint regularized optimization problem, which employs two autoencoders to generate in a deep and non-linear manner the user and item-latent factors.
arXiv Detail & Related papers (2021-12-08T15:14:26Z)
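As referenced in the Caption-Matching (CM) entry above, here is a toy sketch of matching via captions as a domain-agnostic intermediate representation. TF-IDF similarity stands in for the paper's vision-language embeddings, the captioner is assumed to exist off-the-shelf, and all captions below are invented.

```python
# Hypothetical sketch of Caption-Matching: captions generated for query
# and gallery images (by any off-the-shelf captioner) bridge the visual
# domain gap; retrieval then reduces to text similarity. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query_caption = ["a sketch of a red sports car on a road"]
gallery_captions = [
    "a photo of a red sports car parked on a street",
    "a painting of a sailboat at sunset",
    "a photo of a bicycle leaning against a wall",
]

vec = TfidfVectorizer().fit(query_caption + gallery_captions)
sims = cosine_similarity(vec.transform(query_caption),
                         vec.transform(gallery_captions))[0]
ranking = sims.argsort()[::-1]  # gallery indices, best match first
print(ranking, sims[ranking])
```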
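And, as referenced in the FedDCSR entry, a minimal sketch of sequence-representation disentanglement in the spirit of SRD: two parallel encoders split a user's sequence into domain-shared and domain-exclusive features. The federated aggregation and the CIM contrastive objective are omitted; the GRU encoders and all dimensions are assumptions, not the paper's design.

```python
# Hypothetical SRD-style disentanglement: one encoder yields a
# domain-shared feature, another a domain-exclusive feature.
import torch
import torch.nn as nn

class DisentangledSeqEncoder(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=64):
        super().__init__()
        self.shared_enc = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.exclusive_enc = nn.GRU(input_dim, hidden_dim, batch_first=True)

    def forward(self, seq_emb):
        # seq_emb: (batch, seq_len, input_dim) item-embedding sequence.
        _, h_shared = self.shared_enc(seq_emb)   # domain-shared feature
        _, h_excl = self.exclusive_enc(seq_emb)  # domain-exclusive feature
        return h_shared[-1], h_excl[-1]          # each (batch, hidden_dim)

enc = DisentangledSeqEncoder()
shared, exclusive = enc(torch.randn(4, 20, 128))
# In federated training, only parameters behind the shared feature would
# typically be aggregated across domains; exclusive parts stay local.
```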
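Finally, a toy sketch of the coupled-autoencoder idea from the last entry: two autoencoders, one per domain, reconstruct their inputs while a coupling term pulls the latent codes of paired entities together. The architecture, losses, and the coupling weight are all illustrative assumptions, not the authors' method.

```python
# Hypothetical coupled autoencoders for cross-domain recommendation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledAE(nn.Module):
    def __init__(self, dim=256, latent=32):
        super().__init__()
        self.enc_s = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())  # source
        self.dec_s = nn.Linear(latent, dim)
        self.enc_t = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())  # target
        self.dec_t = nn.Linear(latent, dim)

    def forward(self, x_s, x_t):
        z_s, z_t = self.enc_s(x_s), self.enc_t(x_t)
        recon = (F.mse_loss(self.dec_s(z_s), x_s)
                 + F.mse_loss(self.dec_t(z_t), x_t))
        # Coupling: align latent codes of entities paired across domains.
        couple = F.mse_loss(z_s, z_t)
        return recon + 0.1 * couple  # 0.1 weight is an assumption

model = CoupledAE()
loss = model(torch.randn(8, 256), torch.randn(8, 256))
loss.backward()
```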