PRISM: Personalized Recommendation via Information Synergy Module
- URL: http://arxiv.org/abs/2601.10944v1
- Date: Fri, 16 Jan 2026 02:17:54 GMT
- Title: PRISM: Personalized Recommendation via Information Synergy Module
- Authors: Xinyi Zhang, Yutong Li, Peijie Sun, Letian Sha, Zhongxuan Han,
- Abstract summary: PRISM is a plug-and-play framework for sequential recommendation (SR). It decomposes multimodal information into unique, redundant, and synergistic components. Experiments on four datasets and three SR backbones demonstrate its effectiveness and versatility.
- Score: 12.797662213207936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal sequential recommendation (MSR) leverages diverse item modalities to improve recommendation accuracy, but achieving effective and adaptive fusion remains challenging. Existing MSR models often overlook synergistic information that emerges only through modality combinations. Moreover, they typically assume a fixed importance for different modality interactions across users. To address these limitations, we propose \textbf{P}ersonalized \textbf{R}ecommendation via \textbf{I}nformation \textbf{S}ynergy \textbf{M}odule (PRISM), a plug-and-play framework for sequential recommendation (SR). PRISM explicitly decomposes multimodal information into unique, redundant, and synergistic components through an Interaction Expert Layer and dynamically weights them via an Adaptive Fusion Layer guided by user preferences. This information-theoretic design enables fine-grained disentanglement and personalized fusion of multimodal signals. Extensive experiments on four datasets and three SR backbones demonstrate its effectiveness and versatility. The code is available at https://github.com/YutongLi2024/PRISM.
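To make the described design concrete, below is a minimal PyTorch sketch of the two components named in the abstract: an Interaction Expert Layer that splits two modality embeddings into unique, redundant, and synergistic parts, and an Adaptive Fusion Layer that weights those parts per user. All module internals, dimensions, and the softmax gate are illustrative assumptions, not the authors' implementation; the released code at the GitHub link above is authoritative.

```python
# Illustrative sketch only: layer internals, dims, and gating are assumptions,
# not the PRISM reference implementation.
import torch
import torch.nn as nn


class InteractionExpertLayer(nn.Module):
    """One expert head per interaction type: unique (per modality),
    redundant (shared across modalities), and synergistic (joint-only)."""

    def __init__(self, dim: int):
        super().__init__()
        self.unique_v = nn.Linear(dim, dim)        # unique-to-vision signal
        self.unique_t = nn.Linear(dim, dim)        # unique-to-text signal
        self.redundant = nn.Linear(2 * dim, dim)   # overlapping signal
        self.synergy = nn.Sequential(              # emerges only jointly
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([v, t], dim=-1)
        # Stack the four components into an expert axis: (..., 4, dim).
        return torch.stack(
            [self.unique_v(v), self.unique_t(t),
             self.redundant(joint), self.synergy(joint)], dim=-2
        )


class AdaptiveFusionLayer(nn.Module):
    """Weights the four components per user; a softmax gate over the
    user embedding stands in for the preference-guided weighting."""

    def __init__(self, dim: int, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, experts: torch.Tensor, user: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.gate(user), dim=-1)      # (..., 4)
        return (w.unsqueeze(-1) * experts).sum(dim=-2)  # (..., dim)


# Usage: fused item representation that a SR backbone could consume.
dim = 64
experts = InteractionExpertLayer(dim)
fusion = AdaptiveFusionLayer(dim)
v, t, u = torch.randn(8, dim), torch.randn(8, dim), torch.randn(8, dim)
item_repr = fusion(experts(v, t), u)
print(item_repr.shape)  # torch.Size([8, 64])
```

Because the fused output has the same shape as a plain item embedding, a module of this form could drop into an existing SR backbone, which is consistent with the plug-and-play claim.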
Related papers
- CAMMSR: Category-Guided Attentive Mixture of Experts for Multimodal Sequential Recommendation [23.478610632707728]
We propose a Category-guided Attentive Mixture of Experts model for Multimodal Sequential Recommendation. At its core, CAMMSR introduces a category-guided attentive mixture of experts module, which learns specialized item representations from multiple perspectives. Experiments on four public datasets demonstrate that CAMMSR consistently outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2026-03-04T17:39:35Z) - Cross-Modal Attention Network with Dual Graph Learning in Multimodal Recommendation [12.802844514133255]
We propose the Cross-modal Recursive Attention Network with dual graph Embedding (CRANE). We design a core Recursive Cross-Modal Attention (RCA) mechanism that iteratively refines modality features based on cross-correlations in a joint latent space. For symmetric multimodal learning, we explicitly construct users' multimodal profiles by aggregating features of their interacted items.
arXiv Detail & Related papers (2026-01-16T10:09:39Z) - Structurally Refined Graph Transformer for Multimodal Recommendation [13.296555757708298]
We present SRGFormer, a structurally optimized multimodal recommendation model. By modifying the transformer for better integration into our model, we capture the overall behavior patterns of users. Then, we enhance structural information by embedding multimodal information into a hypergraph structure to aid in learning the local structures between users and items.
arXiv Detail & Related papers (2025-11-01T15:18:00Z) - REMOTE: A Unified Multimodal Relation Extraction Framework with Multilevel Optimal Transport and Mixture-of-Experts [20.43650235783012]
Multimodal relation extraction (MRE) is a crucial task in the fields of Knowledge Graph and Multimedia. We propose REMOTE, a novel unified multimodal Relation Extraction framework with Multilevel Optimal Transport and Mixture-of-Experts.
arXiv Detail & Related papers (2025-09-05T06:52:03Z) - Multi-modal Adaptive Mixture of Experts for Cold-start Recommendation [1.9967512860886603]
MAMEX is a novel framework for multimodal cold-start recommendation. It dynamically leverages latent representations from different modalities. Experiments show MAMEX outperforms state-of-the-art methods in cold-start scenarios.
arXiv Detail & Related papers (2025-08-11T14:47:14Z) - I$^3$-MRec: Invariant Learning with Information Bottleneck for Incomplete Modality Recommendation [56.55935146424585]
We introduce I$^3$-MRec, which applies Invariant learning with the Information bottleneck principle for Incomplete Modality Recommendation. By treating each modality as a distinct semantic environment, I$^3$-MRec employs invariant risk minimization (IRM) to learn preference-oriented representations. I$^3$-MRec consistently outperforms existing state-of-the-art MRS methods across various modality-missing scenarios.
arXiv Detail & Related papers (2025-08-06T09:29:50Z) - FindRec: Stein-Guided Entropic Flow for Multi-Modal Sequential Recommendation [57.577843653775]
We propose FindRec (Flexible unified information disentanglement for multi-modal sequential Recommendation). A Stein kernel-based Integrated Information Coordination Module (IICM) theoretically guarantees distribution consistency between multimodal features and ID streams. A cross-modal expert routing mechanism adaptively filters and combines multimodal features based on their contextual relevance.
arXiv Detail & Related papers (2025-07-07T04:09:45Z) - Learning Item Representations Directly from Multimodal Features for Effective Recommendation [51.49251689107541]
Multimodal recommender systems predominantly leverage Bayesian Personalized Ranking (BPR) optimization to learn item representations (a minimal BPR sketch follows this list). We propose a novel model (i.e., LIRDRec) that learns item representations directly from multimodal features to augment recommendation performance.
arXiv Detail & Related papers (2025-05-08T05:42:22Z) - CADMR: Cross-Attention and Disentangled Learning for Multimodal Recommender Systems [0.6037276428689637]
We propose CADMR, a novel autoencoder-based multimodal recommender system framework. We evaluate CADMR on three benchmark datasets, demonstrating significant performance improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-12-03T09:09:52Z) - Federated Vision-Language-Recommendation with Personalized Fusion [48.25209840295838]
This paper introduces FedVLR, a federated VLR framework specially designed for user-specific personalized fusion of vision-language representations. The effectiveness of our proposed FedVLR has been validated on seven benchmark datasets.
arXiv Detail & Related papers (2024-10-11T03:10:09Z) - BiVRec: Bidirectional View-based Multimodal Sequential Recommendation [55.87443627659778]
We propose an innovative framework, BivRec, that jointly trains the recommendation tasks in both ID and multimodal views.
BivRec achieves state-of-the-art performance on five datasets and showcases various practical advantages.
arXiv Detail & Related papers (2024-02-27T09:10:41Z) - MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z) - Knowledge-Enhanced Hierarchical Graph Transformer Network for Multi-Behavior Recommendation [56.12499090935242]
This work proposes a Knowledge-Enhanced Hierarchical Graph Transformer Network (KHGT) to investigate multi-typed interactive patterns between users and items in recommender systems.
KHGT is built upon a graph-structured neural architecture to capture type-specific behavior characteristics.
We show that KHGT consistently outperforms many state-of-the-art recommendation methods across various evaluation settings.
arXiv Detail & Related papers (2021-10-08T09:44:00Z)
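Several entries above lean on standard building blocks; the LIRDRec entry, for instance, cites Bayesian Personalized Ranking (BPR). As a reference point, here is a minimal PyTorch sketch of the BPR pairwise loss; the dot-product scorer and embedding shapes are illustrative assumptions, not any listed paper's exact setup.

```python
# Minimal BPR pairwise loss sketch; scorer and shapes are assumptions.
import torch
import torch.nn.functional as F


def bpr_loss(user: torch.Tensor, pos_item: torch.Tensor,
             neg_item: torch.Tensor) -> torch.Tensor:
    """user, pos_item, neg_item: (batch, dim) embedding tensors."""
    pos_score = (user * pos_item).sum(-1)  # score for observed interaction
    neg_score = (user * neg_item).sum(-1)  # score for sampled non-interaction
    # -log sigmoid(pos - neg): numerically stable form of the BPR objective,
    # which pushes observed items to rank above sampled negatives.
    return -F.logsigmoid(pos_score - neg_score).mean()


u, i, j = (torch.randn(32, 64) for _ in range(3))
print(bpr_loss(u, i, j))  # scalar loss, lower when positives rank higher
```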