Bi-Level Optimization for Generative Recommendation: Bridging Tokenization and Generation
- URL: http://arxiv.org/abs/2510.21242v1
- Date: Fri, 24 Oct 2025 08:25:56 GMT
- Title: Bi-Level Optimization for Generative Recommendation: Bridging Tokenization and Generation
- Authors: Yimeng Bai, Chang Liu, Yang Zhang, Dingxian Wang, Frank Yang, Andrew Rabinovich, Wenge Rong, Fuli Feng,
- Abstract summary: Generative recommendation is emerging as a transformative paradigm by directly generating recommended items, rather than relying on matching. Existing approaches treat the tokenizer, which derives suitable item identifiers, and the recommender built on those identifiers, separately. We propose BLOGER, a Bi-Level Optimization for GEnerative Recommendation framework, which explicitly models the interdependence between the tokenizer and the recommender.
- Score: 42.09028908758132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative recommendation is emerging as a transformative paradigm by directly generating recommended items, rather than relying on matching. Building such a system typically involves two key components: (1) optimizing the tokenizer to derive suitable item identifiers, and (2) training the recommender based on those identifiers. Existing approaches often treat these components separately--either sequentially or in alternation--overlooking their interdependence. This separation can lead to misalignment: the tokenizer is trained without direct guidance from the recommendation objective, potentially yielding suboptimal identifiers that degrade recommendation performance. To address this, we propose BLOGER, a Bi-Level Optimization for GEnerative Recommendation framework, which explicitly models the interdependence between the tokenizer and the recommender in a unified optimization process. The lower level trains the recommender using tokenized sequences, while the upper level optimizes the tokenizer based on both the tokenization loss and recommendation loss. We adopt a meta-learning approach to solve this bi-level optimization efficiently, and introduce gradient surgery to mitigate gradient conflicts in the upper-level updates, thereby ensuring that item identifiers are both informative and recommendation-aligned. Extensive experiments on real-world datasets demonstrate that BLOGER consistently outperforms state-of-the-art generative recommendation methods while maintaining practical efficiency with no significant additional computational overhead, effectively bridging the gap between item tokenization and autoregressive generation.
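The gradient-surgery step in the upper-level update can be illustrated with a PCGrad-style projection. The sketch below is an assumption for illustration only (the function name and toy gradients are not from BLOGER's code): when the tokenization gradient conflicts with the recommendation gradient, it is projected onto the latter's normal plane before combining the two.

```python
import numpy as np

def gradient_surgery(g_tok, g_rec):
    """PCGrad-style conflict resolution (illustrative sketch).

    If the tokenization gradient g_tok points against the
    recommendation gradient g_rec (negative dot product), remove
    its conflicting component before summing the two directions.
    """
    dot = np.dot(g_tok, g_rec)
    if dot < 0:  # gradients conflict
        g_tok = g_tok - (dot / np.dot(g_rec, g_rec)) * g_rec
    return g_tok + g_rec  # combined upper-level update direction
```

With fully opposed gradients the tokenization component is zeroed out, so the update follows the recommendation objective; non-conflicting gradients are simply summed.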
Related papers
- FlashEvaluator: Expanding Search Space with Parallel Evaluation [27.839033452409428]
We propose FlashEvaluator, which enables cross-sequence token information sharing and processes all sequences in a single forward pass. This yields sublinear computational complexity that improves the system's efficiency and supports direct inter-sequence comparisons. FlashEvaluator has been deployed in the online recommender system of Kuaishou, delivering substantial and sustained revenue gains in practice.
arXiv Detail & Related papers (2026-03-03T03:35:34Z) - RankGR: Rank-Enhanced Generative Retrieval with Listwise Direct Preference Optimization in Recommendation [36.297513746770456]
We propose RankGR, a Generative Retrieval method that incorporates listwise direct preference optimization for recommendation. In IAP, we incorporate a novel listwise direct preference optimization strategy into GR, thus facilitating a more comprehensive understanding of the hierarchical user preferences. We implement several practical improvements in training and deployment, ultimately achieving a real-time system capable of handling nearly ten thousand requests per second.
arXiv Detail & Related papers (2026-02-09T12:13:43Z) - Closing the Performance Gap in Generative Recommenders with Collaborative Tokenization and Efficient Modeling [10.757287948514604]
We introduce a contrastive tokenization method that integrates collaborative information directly into the learned item representations. We also propose MARIUS, a lightweight, audio-inspired generative model that decouples timeline modeling from item decoding.
arXiv Detail & Related papers (2025-08-12T17:06:55Z) - On the Role of Feedback in Test-Time Scaling of Agentic AI Workflows [71.92083784393418]
Agentic AI (systems that autonomously plan and act) are becoming widespread, yet their task success rate on complex tasks remains low. Inference-time alignment relies on three components: sampling, evaluation, and feedback. We introduce Iterative Agent Decoding (IAD), a procedure that repeatedly inserts feedback extracted from different forms of critiques.
arXiv Detail & Related papers (2025-04-02T17:40:47Z) - Unleash LLMs Potential for Recommendation by Coordinating Twin-Tower Dynamic Semantic Token Generator [60.07198935747619]
We propose the Twin-Tower Dynamic Semantic Recommender (TTDS), the first generative RS to adopt a dynamic semantic index paradigm.
More specifically, we devise, for the first time, a dynamic knowledge fusion framework that integrates a twin-tower semantic token generator into the LLM-based recommender.
The proposed TTDS recommender achieves an average improvement of 19.41% in Hit-Rate and 20.84% in NDCG, compared with the leading baseline methods.
arXiv Detail & Related papers (2024-09-14T01:45:04Z) - Generative Recommender with End-to-End Learnable Item Tokenization [51.82768744368208]
We introduce ETEGRec, a novel End-To-End Generative Recommender that unifies item tokenization and generative recommendation into a cohesive framework. ETEGRec consists of an item tokenizer and a generative recommender built on a dual encoder-decoder architecture. We develop an alternating optimization technique to ensure stable and efficient end-to-end training of the entire framework.
arXiv Detail & Related papers (2024-09-09T12:11:53Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
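The bi-level structure shared by BLOGER and the convergence-analysis paper above can be illustrated on a toy problem using a one-step unrolled hypergradient, a common meta-learning approximation. The quadratic losses and learning rates below are assumptions chosen purely for illustration, not taken from either paper:

```python
# Toy bilevel problem (illustrative sketch).
# Lower level:  min_w f(w, t) = 0.5 * (w - t)^2    (recommender stand-in)
# Upper level:  min_t g(w*(t)) = 0.5 * (w*(t) - 1)^2  (tokenizer stand-in)

def bilevel_step(w, t, lr_inner=0.1, lr_outer=0.05):
    """One outer iteration: an inner gradient step on w, then a
    hypergradient step on t through the one-step unroll."""
    w_new = w - lr_inner * (w - t)        # inner update: df/dw = w - t
    # Through the unroll, dw_new/dt = lr_inner, so the hypergradient is
    # dg/dt = (w_new - 1) * lr_inner.
    t_new = t - lr_outer * (w_new - 1.0) * lr_inner
    return w_new, t_new
```

Iterating this step drives both variables to the fixed point w = t = 1: the upper level steers the lower-level solution toward its own objective, which is the alignment effect the bi-level formulation is designed to achieve.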
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.