MLLM-Driven Semantic Identifier Generation for Generative Cross-Modal Retrieval
- URL: http://arxiv.org/abs/2509.17359v2
- Date: Mon, 03 Nov 2025 02:59:48 GMT
- Title: MLLM-Driven Semantic Identifier Generation for Generative Cross-Modal Retrieval
- Authors: Tianyuan Li, Lei Wang, Ahtamjan Ahmat, Yating Yang, Bo Ma, Rui Dong, Bangju Han
- Abstract summary: We propose a vocabulary-efficient identifier generation framework that prompts MLLMs to generate Structured Semantic Identifiers from image-caption pairs. These identifiers are composed of concept-level tokens such as objects and actions, naturally aligning with the model's generation space. We also introduce a Rationale-Guided Supervision Strategy, prompting the model to produce a one-sentence explanation alongside each identifier.
- Score: 7.524529523498721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative cross-modal retrieval, which treats retrieval as a generation task, has emerged as a promising direction with the rise of Multimodal Large Language Models (MLLMs). In this setting, the model responds to a text query by generating an identifier corresponding to the target image. However, existing methods typically rely on manually crafted string IDs, clustering-based labels, or atomic identifiers requiring vocabulary expansion, all of which face challenges in semantic alignment or scalability. To address these limitations, we propose a vocabulary-efficient identifier generation framework that prompts MLLMs to generate Structured Semantic Identifiers from image-caption pairs. These identifiers are composed of concept-level tokens such as objects and actions, naturally aligning with the model's generation space without modifying the tokenizer. Additionally, we introduce a Rationale-Guided Supervision Strategy, which prompts the model to produce a one-sentence explanation alongside each identifier; this explanation serves as an auxiliary supervision signal that improves semantic grounding and reduces hallucinations during training.
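The abstract does not give a concrete identifier format, but the idea of a Structured Semantic Identifier built from concept-level tokens can be sketched minimally. The `SemanticIdentifier` class, its fields, and the prompt wording below are all illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SemanticIdentifier:
    """Hypothetical Structured Semantic Identifier: concept-level tokens
    (objects, actions, attributes) drawn from the MLLM's existing
    vocabulary, so no tokenizer modification is required."""
    objects: List[str]
    actions: List[str]
    attributes: List[str] = field(default_factory=list)

    def to_string(self) -> str:
        # Serialize as plain natural-language tokens the model can generate.
        return " ".join(self.objects + self.actions + self.attributes)

def build_prompt(caption: str) -> str:
    """Assumed prompt shape: ask the MLLM for an identifier plus a
    one-sentence rationale (the auxiliary supervision signal)."""
    return (
        "Given the image caption below, list its key objects and actions "
        "as a short identifier, then give a one-sentence rationale.\n"
        f"Caption: {caption}\n"
        "Identifier:"
    )

sid = SemanticIdentifier(objects=["dog", "frisbee"], actions=["jumping"])
print(sid.to_string())  # dog frisbee jumping
print(build_prompt("A dog jumping to catch a frisbee in the park."))
```

Because the identifier is ordinary text, the rationale-guided variant would simply train the model to emit the explanation after the identifier, with no vocabulary expansion needed.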
Related papers
- AgenticTagger: Structured Item Representation for Recommendation with LLM Agents [58.12004213978182]
AgenticTagger is a framework that queries LLMs for representing items with sequences of text descriptors. To effectively and efficiently ground the vocabulary in the item corpus of interest, we design a multi-agent reflection mechanism. Experiments on public and private data show AgenticTagger brings consistent improvements across diverse recommendation scenarios.
arXiv Detail & Related papers (2026-02-05T18:01:37Z)
- Unleashing the Native Recommendation Potential: LLM-Based Generative Recommendation via Structured Term Identifiers [51.64398574262054]
This paper introduces Term IDs (TIDs), defined as a set of semantically rich and standardized textual keywords, to serve as robust item identifiers. We propose GRLM, a novel framework centered on TIDs, which converts items' metadata into standardized TIDs and utilizes Integrative Instruction Fine-tuning to collaboratively optimize term internalization and sequential recommendation.
arXiv Detail & Related papers (2026-01-11T07:53:20Z) - SemCORE: A Semantic-Enhanced Generative Cross-Modal Retrieval Framework with MLLMs [70.79124435220695]
We propose a novel unified Semantic-enhanced generative Cross-mOdal REtrieval framework (SemCORE)<n>We first construct a Structured natural language IDentifier (SID) that effectively aligns target identifiers with generative models optimized for natural language comprehension and generation.<n>We then introduce a Generative Semantic Verification (GSV) strategy enabling fine-grained target discrimination.
arXiv Detail & Related papers (2025-04-17T17:59:27Z) - Order-agnostic Identifier for Large Language Model-based Generative Recommendation [94.37662915542603]
Items are assigned identifiers for Large Language Models (LLMs) to encode user history and generate the next item.<n>Existing approaches leverage either token-sequence identifiers, representing items as discrete token sequences, or single-token identifiers, using ID or semantic embeddings.<n>We propose SETRec, which leverages semantic tokenizers to obtain order-agnostic multi-dimensional tokens.
arXiv Detail & Related papers (2025-02-15T15:25:38Z) - Generative Cross-Modal Retrieval: Memorizing Images in Multimodal
Language Models for Retrieval and Beyond [99.73306923465424]
We introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images.
By memorizing images in MLLMs, we introduce a new paradigm to cross-modal retrieval, distinct from previous discriminative approaches.
arXiv Detail & Related papers (2024-02-16T16:31:46Z) - Language Models As Semantic Indexers [78.83425357657026]
We introduce LMIndexer, a self-supervised framework to learn semantic IDs with a generative language model.
We show the high quality of the learned IDs and demonstrate their effectiveness on three tasks including recommendation, product search, and document retrieval.
arXiv Detail & Related papers (2023-10-11T18:56:15Z)