CARTS: Collaborative Agents for Recommendation Textual Summarization
- URL: http://arxiv.org/abs/2506.17765v2
- Date: Tue, 01 Jul 2025 05:47:05 GMT
- Title: CARTS: Collaborative Agents for Recommendation Textual Summarization
- Authors: Jiao Chen, Kehui Yao, Reza Yousefi Maragheh, Kai Zhao, Jianpeng Xu, Jason Cho, Evren Korpeoglu, Sushant Kumar, Kannan Achan,
- Abstract summary: CARTS is a multi-agent framework designed for structured summarization in recommendation systems. It decomposes the task into three stages: Generation Augmented Generation, a refinement circle, and arbitration. It delivers higher title relevance and improved user engagement metrics.
- Score: 14.417465931316066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current recommendation systems often require some form of textual data summarization, such as generating concise and coherent titles for product carousels or other grouped item displays. While large language models have shown promise in NLP domains for textual summarization, these approaches do not directly apply to recommendation systems, where explanations must be highly relevant to the core features of item sets and must adhere to strict word-limit constraints. In this paper, we propose CARTS (Collaborative Agents for Recommendation Textual Summarization), a multi-agent LLM framework designed for structured summarization in recommendation systems. CARTS decomposes the task into three stages: Generation Augmented Generation (GAG), a refinement circle, and arbitration, in which successive agent roles are responsible for extracting salient item features, iteratively refining candidate titles based on relevance and length feedback, and selecting the final title through a collaborative arbitration process. Experiments on large-scale e-commerce data and live A/B testing show that CARTS significantly outperforms single-pass and chain-of-thought LLM baselines, delivering higher title relevance and improved user engagement metrics.
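The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the agent "LLM calls" are stubbed with simple heuristics, and all function names (`extract_features`, `refine`, `arbitrate`) are hypothetical.

```python
# Toy sketch of a CARTS-style pipeline: feature extraction (GAG),
# a refinement circle enforcing a word limit, and arbitration.
# All logic here is a stand-in for the LLM agents in the paper.

MAX_WORDS = 5  # strict word-limit constraint on carousel titles

def extract_features(items):
    """GAG stage (stub): keep attributes shared by every item in the set."""
    common = set(items[0]["attrs"])
    for item in items[1:]:
        common &= set(item["attrs"])
    return sorted(common)

def refine(title, max_rounds=3):
    """Refinement circle (stub): iteratively shorten the candidate title
    in response to length feedback until it meets the word limit."""
    for _ in range(max_rounds):
        words = title.split()
        if len(words) <= MAX_WORDS:
            break
        title = " ".join(words[:MAX_WORDS])  # crude "shorten" feedback step
    return title

def arbitrate(candidates, features):
    """Arbitration (stub): score candidates by feature coverage, pick the best."""
    def coverage(title):
        return sum(f.lower() in title.lower() for f in features)
    return max(candidates, key=coverage)

def carts(items):
    features = extract_features(items)
    # Several "agents" would propose candidates; here we hand-build two.
    candidates = [
        " ".join(features).title(),
        ("Top Picks: " + " ".join(features)).title(),
    ]
    refined = [refine(c) for c in candidates]
    return arbitrate(refined, features)

items = [
    {"name": "Trail Shoe A", "attrs": ["running", "trail", "waterproof"]},
    {"name": "Trail Shoe B", "attrs": ["running", "trail", "lightweight"]},
]
print(carts(items))  # a short title built from the features shared across items
```

The point of the sketch is the division of labor: each stage consumes the previous stage's output plus explicit feedback (here, only length), which is what distinguishes the framework from a single-pass generation call.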
Related papers
- Generative Product Recommendations for Implicit Superlative Queries [21.750990820244983]
In Recommender Systems, users often seek the best products through indirect, vague, or under-specified queries, such as "best shoes for trail running". We investigate how Large Language Models can generate implicit attributes for ranking as well as reason over them to improve product recommendations for such queries.
arXiv Detail & Related papers (2025-04-26T00:05:47Z)
- Bridging Textual-Collaborative Gap through Semantic Codes for Sequential Recommendation [91.13055384151897]
CCFRec is a novel Code-based textual and Collaborative semantic Fusion method for sequential Recommendation. We generate fine-grained semantic codes from multi-view text embeddings through vector quantization techniques. To further enhance the fusion of textual and collaborative semantics, we introduce an optimization strategy.
arXiv Detail & Related papers (2025-03-15T15:54:44Z)
- REGEN: A Dataset and Benchmarks with Natural Language Critiques and Narratives [4.558818396613368]
We extend the Amazon Product Reviews dataset by inpainting two key natural language features. The narratives include product endorsements, purchase explanations, and summaries of user preferences.
arXiv Detail & Related papers (2025-03-14T23:47:46Z)
- Breaking the Clusters: Uniformity-Optimization for Text-Based Sequential Recommendation [17.042627742322427]
Traditional sequential recommendation methods rely on explicit item IDs to capture user preferences over time. Recent studies have shifted towards leveraging text-only information for recommendation. We propose UniT, a framework that employs three pairwise item sampling strategies.
arXiv Detail & Related papers (2025-02-19T08:35:28Z)
- Knowledge-Enhanced Conversational Recommendation via Transformer-based Sequential Modelling [58.681146735761224]
We first propose a Transformer-based sequential conversational recommendation method, named TSCR, to model the sequential dependencies in the conversations. We then propose a knowledge graph enhanced version of TSCR, called TSCRKG. Experimental results demonstrate that our TSCR model significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-12-03T12:20:56Z)
- Learning Partially Aligned Item Representation for Cross-Domain Sequential Recommendation [72.73379646418435]
Cross-domain sequential recommendation aims to uncover and transfer users' sequential preferences across domains.
Misaligned item representations can potentially lead to sub-optimal sequential modeling and user representation alignment.
We propose a model-agnostic framework called Cross-domain item representation Alignment for Cross-Domain Sequential Recommendation.
arXiv Detail & Related papers (2024-05-21T03:25:32Z)
- Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation [50.19602159938368]
Large language models (LLMs) are revolutionizing conversational recommender systems.
We propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs.
Our framework demonstrates improved accuracy metrics across three different conversational recommendation datasets.
arXiv Detail & Related papers (2024-05-20T15:37:55Z)
- Learnable Item Tokenization for Generative Recommendation [78.30417863309061]
We propose LETTER (a LEarnable Tokenizer for generaTivE Recommendation), which integrates hierarchical semantics, collaborative signals, and code assignment diversity.
LETTER incorporates Residual Quantized VAE for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss to mitigate code assignment bias.
arXiv Detail & Related papers (2024-05-12T15:49:38Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries [46.183289748907804]
We propose SOE, a pipelined system that summarizes, outlines, and elaborates for long-text generation.
SOE produces long texts with significantly better quality, along with faster convergence speed.
arXiv Detail & Related papers (2020-10-14T13:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.