S$^2$GR: Stepwise Semantic-Guided Reasoning in Latent Space for Generative Recommendation
- URL: http://arxiv.org/abs/2601.18664v1
- Date: Mon, 26 Jan 2026 16:40:37 GMT
- Title: S$^2$GR: Stepwise Semantic-Guided Reasoning in Latent Space for Generative Recommendation
- Authors: Zihao Guo, Jian Wang, Ruxin Zhou, Youhua Liu, Jiawei Guo, Jun Zhao, Xiaoxiao Xu, Yongqi Liu, Kaiqiao Zhan,
- Abstract summary: Generative Recommendation (GR) has emerged as a transformative paradigm with its end-to-end generation advantages. Existing GR methods primarily focus on direct Semantic ID (SID) generation from interaction sequences. We propose stepwise semantic-guided reasoning in latent space (S$^2$GR), a novel reasoning-enhanced GR framework.
- Score: 15.69884243417431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Recommendation (GR) has emerged as a transformative paradigm with its end-to-end generation advantages. However, existing GR methods primarily focus on direct Semantic ID (SID) generation from interaction sequences, failing to activate deeper reasoning capabilities analogous to those in large language models and thus limiting performance potential. We identify two critical limitations in current reasoning-enhanced GR approaches: (1) strict sequential separation between reasoning and generation steps creates imbalanced computational focus across hierarchical SID codes, degrading the quality of the generated codes; (2) generated reasoning vectors lack interpretable semantics, while reasoning paths suffer from unverifiable supervision. In this paper, we propose stepwise semantic-guided reasoning in latent space (S$^2$GR), a novel reasoning-enhanced GR framework. First, we establish a robust semantic foundation via codebook optimization, integrating item co-occurrence relationships to capture behavioral patterns, together with load-balancing and uniformity objectives that maximize codebook utilization while reinforcing coarse-to-fine semantic hierarchies. Our core innovation is a stepwise reasoning mechanism that inserts thinking tokens before each SID generation step, where each token explicitly represents coarse-grained semantics supervised via contrastive learning against ground-truth codebook cluster distributions, ensuring physically grounded reasoning paths and balanced computational focus across all SID codes. Extensive experiments demonstrate the superiority of S$^2$GR, and an online A/B test confirms its efficacy on a large-scale industrial short-video platform.
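The core mechanism described in the abstract, inserting a thinking token before each SID generation step and supervising it against the ground-truth codebook cluster distribution, can be illustrated with a minimal sketch. This is an assumption-laden sketch, not the paper's implementation: the names (StepwiseReasoningDecoder, sem_heads, num_levels), the per-level learned thinking token, and the KL-to-cluster-distribution loss used as a stand-in for the contrastive objective are all hypothetical.

```python
# Minimal sketch (not the paper's code): a decoder that interleaves a learned
# "thinking" token before each of the L hierarchical SID positions, and
# supervises each thinking state against the ground-truth cluster
# (codebook-level) distribution for that level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StepwiseReasoningDecoder(nn.Module):
    def __init__(self, d_model=256, num_levels=3, codebook_size=256):
        super().__init__()
        self.num_levels = num_levels
        # One learned thinking token per SID level (a hypothetical design choice).
        self.think_tokens = nn.Parameter(torch.randn(num_levels, d_model))
        self.decoder = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        # Heads: one predicts the SID code, one maps the thinking state into the
        # coarse-grained semantic (cluster) space used for supervision.
        self.sid_heads = nn.ModuleList(
            [nn.Linear(d_model, codebook_size) for _ in range(num_levels)])
        self.sem_heads = nn.ModuleList(
            [nn.Linear(d_model, codebook_size) for _ in range(num_levels)])

    def forward(self, memory, sid_targets, cluster_targets):
        """memory: (B, T, d) encoded interaction sequence.
        sid_targets: (B, L) ground-truth SID codes per level.
        cluster_targets: (B, L, K) ground-truth codebook cluster distributions."""
        B, d = memory.size(0), memory.size(-1)
        gen_loss, sem_loss = 0.0, 0.0
        tgt = memory.new_zeros(B, 0, d)          # autoregressively grown target
        for lvl in range(self.num_levels):
            # 1) Append this level's thinking token and decode it.
            think = self.think_tokens[lvl].expand(B, 1, d)
            tgt = torch.cat([tgt, think], dim=1)
            h = self.decoder(tgt, memory)
            think_state = h[:, -1]               # latent "reasoning" state
            # 2) Supervise the thinking state against the ground-truth cluster
            #    distribution (soft cross-entropy / KL as a stand-in for the
            #    contrastive objective described in the abstract).
            sem_logits = self.sem_heads[lvl](think_state)
            sem_loss = sem_loss + F.kl_div(
                F.log_softmax(sem_logits, dim=-1), cluster_targets[:, lvl],
                reduction="batchmean")
            # 3) Predict the SID code for this level from the same state.
            sid_logits = self.sid_heads[lvl](think_state)
            gen_loss = gen_loss + F.cross_entropy(sid_logits, sid_targets[:, lvl])
            # 4) Feed the thinking state back so later levels condition on it.
            #    A full model would also append the (teacher-forced) SID embedding.
            tgt = torch.cat([tgt, think_state.unsqueeze(1)], dim=1)
        return gen_loss + sem_loss
```

The point of the sketch is that every SID level gets its own reasoning step with an explicit, verifiable semantic target, rather than a single opaque reasoning vector emitted before generation begins.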
Related papers
- LaSER: Internalizing Explicit Reasoning into Latent Space for Dense Retrieval [74.72139580745511]
LaSER is a novel self-distillation framework that internalizes explicit reasoning into the latent space of retrievers. Our method successfully combines the reasoning depth of explicit CoT pipelines with the inference efficiency of standard dense retrievers.
arXiv Detail & Related papers (2026-03-02T04:11:18Z) - Reasoning-Driven Multimodal LLM for Domain Generalization [72.00754603114187]
We study the role of reasoning in domain generalization using the DomainBed-Reasoning dataset. We propose RD-MLDG, a framework with two components: MTCT (Multi-Task Cross-Training) and SARR (Self-Aligned Reasoning Regularization). Experiments on standard DomainBed datasets demonstrate that RD-MLDG achieves complementary state-of-the-art performance.
arXiv Detail & Related papers (2026-02-27T08:10:06Z) - LogitsCoder: Towards Efficient Chain-of-Thought Path Search via Logits Preference Decoding for Code Generation [86.08600027874662]
We propose LogitsCoder, a novel framework that enhances chain-of-thought reasoning through lightweight, logit-level control mechanisms for code generation. We show that LogitsCoder produces more efficient and higher-quality reasoning chains, leading to superior code generation performance compared to baseline methods.
arXiv Detail & Related papers (2026-02-15T08:52:19Z) - End-to-End Semantic ID Generation for Generative Advertisement Recommendation [33.453121305193434]
We propose a Unified SID generation framework (UniSID) for generative advertisement recommendation. Specifically, we jointly optimize embeddings and SIDs in an end-to-end manner from raw advertising data. Experiments demonstrate that UniSID consistently outperforms state-of-the-art SID generation methods.
arXiv Detail & Related papers (2026-02-11T02:38:26Z) - GHS-TDA: A Synergistic Reasoning Framework Integrating Global Hypothesis Space with Topological Data Analysis [27.271992201673083]
Chain-of-Thought (CoT) has been shown to significantly improve the reasoning accuracy of large language models (LLMs). Existing CoT methods suffer from two fundamental limitations.
arXiv Detail & Related papers (2026-02-10T14:00:30Z) - Masked Diffusion Generative Recommendation [14.679550929790151]
Generative recommendation (GR) typically first quantizes continuous item embeddings into multi-level semantic IDs (SIDs). We propose MDGR, a Masked Diffusion Generative Recommendation framework that reshapes the GR pipeline from three perspectives: codebook, training, and inference.
arXiv Detail & Related papers (2026-01-27T11:39:02Z) - Latent-Space Contrastive Reinforcement Learning for Stable and Efficient LLM Reasoning [16.244366307890832]
We propose DeepLatent Reasoning (DLR), a latent-space bidirectional contrastive reinforcement learning framework. This framework shifts the trial-and-error cost from expensive token-level full-sequence generation to the continuous latent manifold. Experiments demonstrate that DLR achieves more stable training convergence, supports longer-horizon reasoning chains, and facilitates the sustainable accumulation of reasoning capabilities.
arXiv Detail & Related papers (2026-01-24T03:18:22Z) - PRISM: Purified Representation and Integrated Semantic Modeling for Generative Sequential Recommendation [28.629759086187352]
We propose a novel generative recommendation framework, PRISM, with Purified Representation and Integrated Semantic Modeling. PRISM consistently outperforms state-of-the-art baselines across four real-world datasets.
arXiv Detail & Related papers (2026-01-23T08:50:16Z) - Reinforced Efficient Reasoning via Semantically Diverse Exploration [73.41112984160992]
Reinforcement learning with verifiable rewards (RLVR) has proven effective in enhancing the reasoning of large language models (LLMs). We propose reinforced efficient reasoning via semantically diverse explorations, i.e., ROSE, for LLMs. Our method incorporates a semantic-entropy-based branching strategy and an $\varepsilon$-exploration mechanism.
arXiv Detail & Related papers (2026-01-08T15:56:44Z) - Idea-Gated Transformers: Enforcing Semantic Coherence via Differentiable Vocabulary Pruning [0.40611352512781856]
We introduce the Idea-Gated Transformer, a novel architecture that separates semantic planning from syntactic generation. We propose a differentiable gating mechanism that suppresses semantically irrelevant tokens, effectively pruning the search space in real time.
arXiv Detail & Related papers (2025-12-03T01:17:07Z) - Geometrically-Constrained Agent for Spatial Reasoning [53.93718394870856]
Vision Language Models exhibit a fundamental semantic-to-geometric gap in spatial reasoning. Current paradigms fail to bridge this gap. We propose a training-free agentic paradigm that resolves this gap by introducing a formal task constraint.
arXiv Detail & Related papers (2025-11-27T17:50:37Z) - LIHE: Linguistic Instance-Split Hyperbolic-Euclidean Framework for Generalized Weakly-Supervised Referring Expression Comprehension [42.52759428579815]
Existing Weakly-Supervised Referring Expression Comprehension (WREC) methods are fundamentally limited by a one-to-one mapping assumption. We introduce the Weakly-Supervised Generalized Referring Expression Comprehension task (WGREC), a more practical paradigm that handles expressions with variable numbers of referents. We propose a novel WGREC framework named Linguistic Instance-Split Hyperbolic-Euclidean (LIHE), which operates in two stages.
arXiv Detail & Related papers (2025-11-15T04:06:57Z) - DiffGRM: Diffusion-based Generative Recommendation Model [63.35379395455103]
Generative recommendation (GR) is an emerging paradigm that represents each item via a tokenizer as an n-digit semantic ID (SID). We propose DiffGRM, a diffusion-based GR model that replaces the autoregressive decoder with a masked discrete diffusion model (MDM). Experiments show consistent gains over strong generative and discriminative recommendation baselines on multiple datasets.
arXiv Detail & Related papers (2025-10-21T03:23:32Z) - HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs [29.735089231891305]
HiD-VAE is a novel framework that learns hierarchically disentangled item representations through two core innovations. First, HiD-VAE pioneers a hierarchically-supervised quantization process that aligns discrete codes with multi-level item tags. Second, to combat representation entanglement, HiD-VAE incorporates a novel uniqueness loss that directly penalizes latent space overlap.
arXiv Detail & Related papers (2025-08-06T16:45:05Z) - ConciseHint: Boosting Efficient Reasoning via Continuous Concise Hints during Generation [74.37307916314407]
We propose a framework dubbed ConciseHint, which continuously encourages the reasoning model to speak concisely. Experiments on state-of-the-art LRMs, including the DeepSeek-R1 and Qwen-3 series, demonstrate that our method can effectively produce concise reasoning.
arXiv Detail & Related papers (2025-06-23T16:20:44Z) - Reinforced Latent Reasoning for LLM-based Recommendation [92.56166822197919]
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities in complex problem-solving tasks. Existing methods typically rely on fine-tuning with explicit chain-of-thought (CoT) data. In this work, we explore an alternative approach that shifts from explicit CoT reasoning to compact, information-dense latent reasoning.
arXiv Detail & Related papers (2025-05-25T11:03:45Z) - Retrieval is Not Enough: Enhancing RAG Reasoning through Test-Time Critique and Optimization [58.390885294401066]
Retrieval-augmented generation (RAG) has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). RAG pipelines often fail to ensure that model reasoning remains consistent with the evidence retrieved, leading to factual inconsistencies or unsupported conclusions. We propose AlignRAG, a novel iterative framework grounded in Critique-Driven Alignment (CDA). We also introduce AlignRAG-auto, an autonomous variant that dynamically terminates refinement, removing the need to pre-specify the number of critique iterations (a minimal sketch of such a critique loop follows this list).
arXiv Detail & Related papers (2025-04-21T04:56:47Z)
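As noted in the AlignRAG entry above, critique-driven refinement can be viewed as a small test-time loop. The sketch below is a schematic illustration only, assuming hypothetical generate/critique/revise callables that wrap LLM calls; it is not AlignRAG's actual API, and the empty-feedback stopping rule merely mimics the idea of dynamically terminating refinement.

```python
# Minimal sketch of a critique-then-revise loop (hypothetical helper names;
# not AlignRAG's real interface). The loop stops early when the critic
# reports no remaining conflict with the retrieved evidence.
from typing import Callable, List

def critique_refine(question: str,
                    evidence: List[str],
                    generate: Callable[[str, List[str]], str],
                    critique: Callable[[str, List[str], str], str],
                    revise: Callable[[str, List[str], str, str], str],
                    max_iters: int = 3) -> str:
    answer = generate(question, evidence)
    for _ in range(max_iters):
        feedback = critique(question, evidence, answer)
        if not feedback.strip():      # no evidence conflict found -> terminate
            break
        answer = revise(question, evidence, answer, feedback)
    return answer
```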
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.