Bringing Reasoning to Generative Recommendation Through the Lens of Cascaded Ranking
- URL: http://arxiv.org/abs/2602.03692v2
- Date: Wed, 04 Feb 2026 09:13:38 GMT
- Title: Bringing Reasoning to Generative Recommendation Through the Lens of Cascaded Ranking
- Authors: Xinyu Lin, Pengyuan Liu, Wenjie Wang, Yicheng Hu, Chen Xu, Fuli Feng, Qifan Wang, Tat-Seng Chua
- Abstract summary: Generative Recommendation (GR) has become a promising end-to-end approach with high FLOPS utilization for resource-efficient recommendation. We show that current GR models suffer from a critical bias amplification issue, where token-level bias escalates as token generation progresses. To combat the bias amplification issue, it is crucial for GR to 1) incorporate more heterogeneous information, and 2) allocate greater computational resources at each token generation step.
- Score: 107.09842504618369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Recommendation (GR) has become a promising end-to-end approach with high FLOPS utilization for resource-efficient recommendation. Despite the effectiveness, we show that current GR models suffer from a critical \textbf{bias amplification} issue, where token-level bias escalates as token generation progresses, ultimately limiting the recommendation diversity and hurting the user experience. By comparing against the key factor behind the success of traditional multi-stage pipelines, we reveal two limitations in GR that can amplify the bias: homogeneous reliance on the encoded history, and fixed computational budgets that prevent deeper user preference understanding. To combat the bias amplification issue, it is crucial for GR to 1) incorporate more heterogeneous information, and 2) allocate greater computational resources at each token generation step. To this end, we propose CARE, a simple yet effective cascaded reasoning framework for debiased GR. To incorporate heterogeneous information, we introduce a progressive history encoding mechanism, which progressively incorporates increasingly fine-grained history information as the generation process advances. To allocate more computations, we propose a query-anchored reasoning mechanism, which seeks to perform a deeper understanding of historical information through parallel reasoning steps. We instantiate CARE on three GR backbones. Empirical results on four datasets show the superiority of CARE in recommendation accuracy, diversity, efficiency, and promising scalability. The codes and datasets are available at https://github.com/Linxyhaha/CARE.
Related papers
- Beyond the Flat Sequence: Hierarchical and Preference-Aware Generative Recommendations [35.58864660038236]
We propose a novel framework named HPGR (Hierarchical and Preference-aware Generative Recommender). First, a structure-aware pre-training stage employs a session-based Masked Item Modeling objective to learn a hierarchically-informed and semantically rich item representation space. Second, a preference-aware fine-tuning stage leverages these powerful representations to implement a Preference-Guided Sparse Attention mechanism.
arXiv Detail & Related papers (2026-03-01T08:15:34Z) - SimGR: Escaping the Pitfalls of Generative Decoding in LLM-based Recommendation [68.00727783181289]
A core objective in recommender systems is to accurately model the distribution of user preferences over items to enable personalized recommendations. We observe that existing methods inevitably introduce systematic bias when estimating item-level preference distributions. We propose Simply Generative Recommendation (SimGR), a framework that directly models item-level preference distributions in a shared latent space.
arXiv Detail & Related papers (2026-02-08T07:26:52Z) - TeaRAG: A Token-Efficient Agentic Retrieval-Augmented Generation Framework [62.66056331998838]
TeaRAG is a token-efficient agentic RAG framework capable of compressing both retrieval content and reasoning steps. Our reward function evaluates knowledge sufficiency via a knowledge matching mechanism, while penalizing excessive reasoning steps.
arXiv Detail & Related papers (2025-11-07T16:08:34Z) - Does Generative Retrieval Overcome the Limitations of Dense Retrieval? [117.49076770892583]
Generative retrieval (GR) has emerged as a new paradigm in neural information retrieval. We investigate how GR fundamentally diverges from dense retrieval (DR) in both learning objectives and representational capacity.
arXiv Detail & Related papers (2025-09-26T09:38:01Z) - GFlowGR: Fine-tuning Generative Recommendation Frameworks with Generative Flow Networks [36.39484385717512]
This paper treats generative recommendation (GR) as a multi-step generation task and constructs a GFlowNets-based fine-tuning framework (GFlowGR). The proposed framework integrates collaborative knowledge from traditional recommender systems to create an adaptive trajectory sampler and a comprehensive reward model.
arXiv Detail & Related papers (2025-06-19T08:04:31Z) - Reinforcing Video Reasoning with Focused Thinking [65.85683941058916]
We propose TW-GRPO, a novel framework that enhances visual reasoning with focused thinking and dense reward granularity. Specifically, we employ a token weighting mechanism that prioritizes tokens with high informational density. We also reformulate RL training by shifting from single-choice to multi-choice QA tasks.
arXiv Detail & Related papers (2025-05-30T15:42:19Z) - Reinforced Informativeness Optimization for Long-Form Retrieval-Augmented Generation [77.10390725623125]
Long-form question answering (LFQA) presents unique challenges for large language models. RioRAG is a novel reinforcement learning framework that advances long-form RAG through reinforced informativeness optimization.
arXiv Detail & Related papers (2025-05-27T07:34:41Z) - Constrained Auto-Regressive Decoding Constrains Generative Retrieval [71.71161220261655]
Generative retrieval seeks to replace traditional search index data structures with a single large-scale neural network. In this paper, we examine the inherent limitations of constrained auto-regressive generation from two essential perspectives: constraints and beam search.
arXiv Detail & Related papers (2025-04-14T06:54:49Z) - HeteRAG: A Heterogeneous Retrieval-augmented Generation Framework with Decoupled Knowledge Representations [36.61614799098233]
Retrieval-augmented generation (RAG) methods can enhance the performance of LLMs. Existing RAG methods typically employ identical representations of knowledge chunks for both retrieval and generation. We propose a heterogeneous RAG framework (HeteRAG) that decouples the representations of knowledge chunks for retrieval and generation.
arXiv Detail & Related papers (2025-04-12T13:12:54Z) - A Distribution-Aware Flow-Matching for Generating Unstructured Data for Few-Shot Reinforcement Learning [1.0709300917082865]
We introduce a distribution-aware flow matching approach to generate synthetic unstructured data for few-shot reinforcement learning. Our approach addresses key challenges in traditional model-based RL, such as overfitting and data correlation. Results demonstrate that our method achieves stable convergence in terms of maximum Q-value while enhancing frame rates by 30% in the initial timestamps.
arXiv Detail & Related papers (2024-09-21T15:50:59Z) - Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback [41.88662700261036]
RAG systems face limitations when semantic relevance alone does not guarantee improved generation quality.
We propose Pistis-RAG, a new RAG framework designed with a content-centric approach to better align LLMs with human preferences.
arXiv Detail & Related papers (2024-06-21T08:52:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.