DAS: Dual-Aligned Semantic IDs Empowered Industrial Recommender System
- URL: http://arxiv.org/abs/2508.10584v1
- Date: Thu, 14 Aug 2025 12:22:51 GMT
- Title: DAS: Dual-Aligned Semantic IDs Empowered Industrial Recommender System
- Authors: Wencai Ye, Mingjie Sun, Shaoyun Shi, Peng Wang, Wenjin Wu, Peng Jiang,
- Abstract summary: We propose a one-stage Dual-Aligned Semantic IDs (DAS) method that simultaneously optimizes quantization and alignment. DAS achieves more efficient alignment between the semantic IDs and collaborative signals through two innovative approaches. DAS is successfully deployed across various advertising scenarios in the Kuaishou App, serving over 400 million users daily.
- Score: 15.648601380538413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic IDs are discrete identifiers generated by quantizing Multi-modal Large Language Model (MLLM) embeddings, enabling efficient multi-modal content integration in recommendation systems. However, their lack of collaborative signals results in a misalignment with downstream discriminative and generative recommendation objectives. Recent studies have introduced various alignment mechanisms to address this problem, but their two-stage framework design still leads to two main limitations: (1) inevitable information loss during alignment, and (2) inflexibility in applying adaptive alignment strategies, consequently constraining the mutual information maximization during the alignment process. To address these limitations, we propose a novel and flexible one-stage Dual-Aligned Semantic IDs (DAS) method that simultaneously optimizes quantization and alignment, preserving semantic integrity and alignment quality while avoiding the information loss typically associated with two-stage methods. Meanwhile, DAS achieves more efficient alignment between the semantic IDs and collaborative signals, with the following two innovative and effective approaches: (1) Multi-view Contrastive Alignment: To maximize mutual information between semantic IDs and collaborative signals, we first incorporate an ID-based CF debias module, and then design three effective contrastive alignment methods: dual user-to-item (u2i), dual item-to-item/user-to-user (i2i/u2u), and dual co-occurrence item-to-item/user-to-user (i2i/u2u). (2) Dual Learning: By aligning the dual quantizations of users and ads, the constructed semantic IDs for users and ads achieve stronger alignment. Finally, we conduct extensive offline experiments and online A/B tests to evaluate DAS's effectiveness, which is now successfully deployed across various advertising scenarios in the Kuaishou App, serving over 400 million users daily.
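The contrastive alignment the abstract describes can be illustrated with a symmetric InfoNCE objective between quantized semantic-ID embeddings and collaborative (CF) embeddings: matched pairs are pulled together, in-batch mismatches pushed apart, in both directions (the "dual" u2i view). The following is a minimal NumPy sketch of that general technique, not the authors' implementation; the batch shapes, temperature value, and the `info_nce` helper are illustrative assumptions.

```python
import numpy as np

def info_nce(a, b, tau=0.1):
    """Symmetric InfoNCE between two aligned batches of embeddings.

    a, b: (batch, dim) arrays; row i of `a` is the positive pair of row i of `b`.
    Returns the mean contrastive cross-entropy over both alignment directions.
    """
    # Cosine similarity via L2 normalization, scaled by temperature.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / tau                      # (batch, batch) similarity matrix
    diag = np.arange(len(a))
    # a -> b direction: each row's positive sits on the diagonal.
    log_probs_ab = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_ab = -log_probs_ab[diag, diag].mean()
    # b -> a direction: same, with the transposed similarity matrix.
    log_probs_ba = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_ba = -log_probs_ba[diag, diag].mean()
    return (loss_ab + loss_ba) / 2

rng = np.random.default_rng(0)
user_sem = rng.normal(size=(8, 16))  # stand-in for user semantic-ID embeddings
item_cf  = rng.normal(size=(8, 16))  # stand-in for item collaborative embeddings
loss = info_nce(user_sem, item_cf)
```

In a one-stage setup like the one the paper argues for, a loss of this shape would be minimized jointly with the quantization objective rather than applied after the codebooks are frozen.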
Related papers
- R2LED: Equipping Retrieval and Refinement in Lifelong User Modeling with Semantic IDs for CTR Prediction [23.668401664583758]
We propose a novel paradigm that equips retrieval and refinement in Lifelong User Modeling with SEmantic IDs (R2LED). For the retrieval stage, we introduce a Multi-route Mixed Retrieval mechanism that efficiently retrieves candidates from both collaborative and semantic views. For refinement, we design a Bi-level Fusion Refinement, including a target-aware cross-attention for route-level fusion and a gate mechanism for SID-level fusion.
arXiv Detail & Related papers (2026-02-06T11:27:20Z) - DOS: Dual-Flow Orthogonal Semantic IDs for Recommendation in Meituan [8.259886050799922]
We propose the Dual-Flow Orthogonal Semantic IDs (DOS) method for generative recommendation systems. DOS employs a user-item dual-flow framework that leverages collaborative signals to align the Semantic ID codebook space with the generation space. DOS has been successfully deployed in Meituan's mobile application, serving hundreds of millions of users.
arXiv Detail & Related papers (2026-02-04T11:43:42Z) - RecGOAT: Graph Optimal Adaptive Transport for LLM-Enhanced Multimodal Recommendation with Dual Semantic Alignment [23.738860191046538]
We propose RecGOAT, a novel yet simple dual semantic alignment framework for multimodal recommendation. We show that RecGOAT achieves state-of-the-art performance, empirically validating our theoretical insights.
arXiv Detail & Related papers (2026-01-31T11:58:38Z) - The Best of the Two Worlds: Harmonizing Semantic and Hash IDs for Sequential Recommendation [51.62815306481903]
We propose a novel framework that harmonizes semantic IDs (SIDs) and hash IDs (HIDs). Specifically, we devise a dual-branch modeling architecture that captures the multi-granular semantics within SIDs while preserving the unique collaborative identity of HIDs. Experiments on three real-world datasets show that the framework balances recommendation quality for both head and tail items while surpassing existing baselines.
arXiv Detail & Related papers (2025-12-11T07:50:53Z) - MMQ: Multimodal Mixture-of-Quantization Tokenization for Semantic ID Generation and User Behavioral Adaptation [16.81485354427923]
We propose Multimodal Mixture-of-Quantization (MMQ), a two-stage framework that trains a novel multimodal tokenizer. MMQ unifies multimodal synergy, specificity, and behavioral adaptation, providing a scalable and versatile solution for both generative retrieval and discriminative ranking tasks.
arXiv Detail & Related papers (2025-08-21T06:15:49Z) - HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs [33.51075655987504]
HiD-VAE is a novel framework that learns hierarchically disentangled item representations through two core innovations. First, HiD-VAE pioneers a hierarchically-supervised quantization process that aligns discrete codes with multi-level item tags. Second, to combat representation entanglement, HiD-VAE incorporates a novel uniqueness loss that directly penalizes latent space overlap.
arXiv Detail & Related papers (2025-08-06T16:45:05Z) - BiXFormer: A Robust Framework for Maximizing Modality Effectiveness in Multi-Modal Semantic Segmentation [55.486872677160015]
We reformulate multi-modal semantic segmentation as a mask-level classification task. We propose BiXFormer, which integrates Unified Modality Matching (UMM) and Cross Modality Alignment (CMA). Experiments on both synthetic and real-world multi-modal benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2025-06-04T08:04:58Z) - MisoDICE: Multi-Agent Imitation from Unlabeled Mixed-Quality Demonstrations [5.4482836906033585]
We study offline imitation learning (IL) in cooperative multi-agent settings, where demonstrations have unlabeled mixed quality. Our proposed solution is structured in two stages: trajectory labeling and multi-agent imitation learning. We introduce MisoDICE, a novel multi-agent IL algorithm that leverages these labels to learn robust policies.
arXiv Detail & Related papers (2025-05-24T08:43:42Z) - InfoMAE: Pair-Efficient Cross-Modal Alignment for Multimodal Time-Series Sensing Signals [9.648001493025204]
InfoMAE is a cross-modal alignment framework that tackles the challenge of multimodal pair efficiency under the SSL setting. It enhances downstream multimodal tasks by over 60% with significantly improved multimodal pairing efficiency. It also improves unimodal task accuracy by an average of 22%.
arXiv Detail & Related papers (2025-04-13T20:03:29Z) - BBQRec: Behavior-Bind Quantization for Multi-Modal Sequential Recommendation [15.818669767036592]
We propose a Behavior-Bind multi-modal Quantization for Sequential Recommendation (BBQRec) featuring dual-aligned quantization and semantics-aware sequence modeling. BBQRec disentangles modality-agnostic behavioral patterns from noisy modality-specific features through contrastive codebook learning. We design a discretized similarity reweighting mechanism that dynamically adjusts self-attention scores using quantized semantic relationships.
arXiv Detail & Related papers (2025-04-09T07:19:48Z) - A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align global consensus and local dual variables. We provide a comprehensive analysis of the A-FedPD method's efficiency for protracted unparticipated local clients.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration [63.112790050749695]
We introduce EAGER, a novel generative recommendation framework that seamlessly integrates both behavioral and semantic information.
We validate the effectiveness of EAGER on four public benchmarks, demonstrating its superior performance compared to existing methods.
arXiv Detail & Related papers (2024-06-20T06:21:56Z) - NoteLLM-2: Multimodal Large Representation Models for Recommendation [71.87790090964734]
Large Language Models (LLMs) have demonstrated exceptional proficiency in text understanding and embedding tasks. Their potential in multimodal representation, particularly for item-to-item (I2I) recommendations, remains underexplored. We propose an end-to-end fine-tuning method that customizes the integration of any existing LLMs and vision encoders for efficient multimodal representation.
arXiv Detail & Related papers (2024-05-27T03:24:01Z) - Learnable Item Tokenization for Generative Recommendation [78.30417863309061]
We propose LETTER (a LEarnable Tokenizer for generaTivE Recommendation), which integrates hierarchical semantics, collaborative signals, and code assignment diversity.
LETTER incorporates Residual Quantized VAE for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss to mitigate code assignment bias.
arXiv Detail & Related papers (2024-05-12T15:49:38Z) - Disentangling ID and Modality Effects for Session-based Recommendation [46.09367252640389]
We propose a novel framework DIMO to disentangle the effects of ID and modality in the task.
DIMO provides recommendations via causal inference and further creates two templates for generating explanations.
arXiv Detail & Related papers (2024-04-19T15:54:46Z)
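Several of the papers above (DAS, LETTER, HiD-VAE, MMQ) build semantic IDs by residually quantizing content embeddings into a short sequence of discrete codes. As an illustration of that core operation only, the sketch below performs greedy residual quantization with fixed random codebooks; in the actual methods the codebooks are learned end-to-end (e.g. inside an RQ-VAE), so the shapes and the `residual_quantize` helper here are assumptions for demonstration, not any paper's implementation.

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Greedy residual quantization: each level encodes the residual left by
    the previous one, yielding a tuple of code indices (the 'semantic ID')."""
    codes = []
    recon = np.zeros_like(x)
    residual = x.copy()
    for cb in codebooks:                                   # cb: (codebook_size, dim)
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)                         # nearest code at this level
        codes.append(idx)
        recon += cb[idx]                                   # accumulate reconstruction
        residual = x - recon                               # pass the residual down
    return np.stack(codes, axis=1), recon

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                                # toy item embeddings
codebooks = [rng.normal(size=(32, 8)) for _ in range(3)]   # 3 levels, 32 codes each
codes, recon = residual_quantize(x, codebooks)
# codes has shape (4, 3): each row is a 3-token semantic ID for one item
```

The coarse-to-fine structure of the resulting code tuples is what the hierarchical-supervision (HiD-VAE) and collaborative-alignment (DAS, LETTER) losses in the papers above attach to.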
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.