CRMAgent: A Multi-Agent LLM System for E-Commerce CRM Message Template Generation
- URL: http://arxiv.org/abs/2507.08325v1
- Date: Fri, 11 Jul 2025 05:31:35 GMT
- Title: CRMAgent: A Multi-Agent LLM System for E-Commerce CRM Message Template Generation
- Authors: Yinzhu Quan, Xinrui Li, Ying Chen
- Abstract summary: We introduce CRMAgent, a multi-agent system built on large language models (LLMs). First, group-based learning enables the agent to learn from a merchant's own top-performing messages within the same audience segment. Second, retrieval-and-adaptation fetches templates that share the same audience segment and exhibit high similarity in voucher type and product category. Third, a rule-based fallback provides a lightweight zero-shot rewrite when no suitable references are available.
- Score: 4.322428195823703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In e-commerce private-domain channels such as instant messaging and e-mail, merchants engage customers directly as part of their Customer Relationship Management (CRM) programmes to drive retention and conversion. While a few top performers excel at crafting outbound messages, most merchants struggle to write persuasive copy because they lack both expertise and scalable tools. We introduce CRMAgent, a multi-agent system built on large language models (LLMs) that generates high-quality message templates and actionable writing guidance through three complementary modes. First, group-based learning enables the agent to learn from a merchant's own top-performing messages within the same audience segment and rewrite low-performing ones. Second, retrieval-and-adaptation fetches templates that share the same audience segment and exhibit high similarity in voucher type and product category, learns their successful patterns, and adapts them to the current campaign. Third, a rule-based fallback provides a lightweight zero-shot rewrite when no suitable references are available. Extensive experiments show that CRMAgent consistently outperforms merchants' original templates, delivering significant gains in both audience-match and marketing-effectiveness metrics.
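The abstract describes how requests are dispatched across the three modes, but no implementation accompanies this page. The sketch below is one minimal way that dispatch could look; the `Campaign` and `Message` types and the `build_prompt`, `retrieve_similar`, and `llm` names are hypothetical stand-ins, not the authors' API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    segment: str  # audience segment the message targeted
    text: str     # the message template itself

@dataclass
class Campaign:
    segment: str       # audience segment, e.g. "lapsed buyers"
    voucher_type: str  # e.g. "percentage-off coupon"
    category: str      # product category
    draft: str         # the merchant's original, low-performing template

def build_prompt(campaign: Campaign, references: List[str], mode: str) -> str:
    """Assemble a rewrite prompt; references may be empty in fallback mode."""
    ref_block = "\n".join(f"- {r}" for r in references) or "(no references)"
    return (
        f"Mode: {mode}\n"
        f"Audience segment: {campaign.segment}\n"
        f"Voucher: {campaign.voucher_type} | Category: {campaign.category}\n"
        f"Reference templates:\n{ref_block}\n"
        f"Rewrite this draft into a persuasive CRM message:\n{campaign.draft}"
    )

def generate_template(
    campaign: Campaign,
    own_top_messages: List[Message],
    retrieve_similar: Callable[[str, str, str], List[str]],
    llm: Callable[[str], str],
) -> str:
    # Mode 1: group-based learning -- rewrite using the merchant's own
    # top-performing messages within the same audience segment.
    refs = [m.text for m in own_top_messages if m.segment == campaign.segment]
    if refs:
        return llm(build_prompt(campaign, refs, mode="group-based"))
    # Mode 2: retrieval-and-adaptation -- adapt templates that share the
    # segment and are similar in voucher type and product category.
    retrieved = retrieve_similar(campaign.segment, campaign.voucher_type,
                                 campaign.category)
    if retrieved:
        return llm(build_prompt(campaign, retrieved,
                                mode="retrieval-and-adaptation"))
    # Mode 3: rule-based fallback -- lightweight zero-shot rewrite when
    # no suitable references exist.
    return llm(build_prompt(campaign, [], mode="rule-based fallback"))
```

The ordering mirrors the abstract: the merchant's own history is preferred, cross-merchant retrieval comes second, and the rule-based zero-shot rewrite is the last resort.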
Related papers
- Grounded Persuasive Language Generation for Automated Marketing [59.175257431078435]
This paper develops an agentic framework that employs large language models (LLMs) to automate the generation of persuasive and grounded marketing content. Our method is designed to align the generated content with user preferences while highlighting useful factual attributes. We conduct systematic human-subject experiments in the domain of real estate marketing, with a focus group of potential house buyers.
arXiv Detail & Related papers (2025-02-24T03:36:57Z)
- Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model [8.604654904400027]
We introduce CoALM (Conversational Agentic Language Model), a unified approach that integrates both conversational and agentic capabilities. Using CoALM-IT, we train three models, CoALM 8B, CoALM 70B, and CoALM 405B, which outperform top domain-specific models.
arXiv Detail & Related papers (2025-02-12T22:18:34Z)
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective. To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL). Our method achieves state-of-the-art performance in both online and offline metrics.
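The summary does not name the RL algorithm used. Purely as an illustration, a generic REINFORCE-style update with a CTR reward model might look like the following; the `model.sample` API is an assumption, not taken from the paper.

```python
import torch

def reinforce_step(model, reward_model, prompts, optimizer):
    """One policy-gradient step: sample generations, score them with a
    CTR reward model, and reweight their log-likelihood by the reward."""
    sequences, log_probs = model.sample(prompts)    # hypothetical sampling API
    with torch.no_grad():
        rewards = reward_model(prompts, sequences)  # predicted CTR per sample
        baseline = rewards.mean()                   # simple variance reduction
    # Push up the likelihood of samples whose predicted CTR beats the mean.
    loss = -((rewards - baseline) * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```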
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
- CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP [57.49519639951552]
We introduce CLIPErase, a novel approach that disentangles and selectively forgets both visual and textual associations. Experiments on the CIFAR-100 and Flickr30K datasets demonstrate that CLIPErase effectively forgets designated associations in zero-shot tasks for multimodal samples.
arXiv Detail & Related papers (2024-10-30T17:51:31Z)
- FOCUS: Forging Originality through Contrastive Use in Self-Plagiarism for Language Models [38.76912842622624]
Pre-trained Language Models (PLMs) have shown impressive results in various Natural Language Generation (NLG) tasks.
This study introduces a unique "self-plagiarism" contrastive decoding strategy, aimed at boosting the originality of text produced by PLMs.
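The exact "self-plagiarism" formulation is in the paper itself; the sketch below shows only the generic contrastive-decoding step it builds on, under the assumption that the "amateur" logits come from a context that deliberately encourages copying.

```python
import torch
import torch.nn.functional as F

def contrastive_step(logits_expert: torch.Tensor,
                     logits_amateur: torch.Tensor,
                     alpha: float = 0.5) -> int:
    """One contrastive-decoding step over per-step vocabulary logits:
    tokens the normal ('expert') context favors are rewarded, while tokens
    the copy-prone ('amateur') context over-predicts are penalized."""
    log_p_expert = F.log_softmax(logits_expert, dim=-1)
    log_p_amateur = F.log_softmax(logits_amateur, dim=-1)
    scores = log_p_expert - alpha * log_p_amateur  # down-weight copied tokens
    return int(scores.argmax())
```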
arXiv Detail & Related papers (2024-06-02T19:17:00Z)
- Generating Attractive and Authentic Copywriting from Customer Reviews [7.159225692930055]
We propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products.
We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information.
Our framework outperforms all existing baselines and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5.
arXiv Detail & Related papers (2024-04-22T06:33:28Z)
- MIMIR: A Streamlined Platform for Personalized Agent Tuning in Domain Expertise [49.83486066403154]
Mimir is a streamlined platform offering a customizable pipeline for personalized agent tuning.
Mimir supports the generation of general instruction-tuning datasets from the same input.
Mimir integrates these features into a cohesive end-to-end platform, facilitating everything from the uploading of personalized files to one-click agent fine-tuning.
arXiv Detail & Related papers (2024-04-03T23:42:38Z)
- A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation [47.70824723223262]
We propose a new setting for generating product descriptions from images, augmented by marketing keywords.
We present a simple and effective Multimodal In-Context Tuning approach, named ModICT, which introduces a similar product sample as the reference.
Experiments demonstrate that ModICT significantly improves the accuracy (by up to 3.3% on Rouge-L) and diversity (by up to 9.4% on D-5) of generated results compared to conventional methods.
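The summary gives only the high-level idea of using a similar product as the in-context reference. A text-only sketch of assembling such a prompt follows; image features, which ModICT also consumes, are omitted, and all names are illustrative rather than the paper's API.

```python
from typing import List

def build_in_context_prompt(reference_description: str,
                            reference_keywords: List[str],
                            target_keywords: List[str]) -> str:
    """Assemble a one-shot prompt where a similar product's description
    serves as the in-context reference for the target product."""
    return (
        "Reference product\n"
        f"Keywords: {', '.join(reference_keywords)}\n"
        f"Description: {reference_description}\n\n"
        "Target product\n"
        f"Keywords: {', '.join(target_keywords)}\n"
        "Description:"
    )
```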
arXiv Detail & Related papers (2024-02-21T07:38:29Z)
- FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction [49.510163437116645]
Click-through rate (CTR) prediction serves as a core function module in personalized online services.
Traditional ID-based models for CTR prediction take as input the one-hot encoded ID features of the tabular modality.
Pretrained Language Models (PLMs) have given rise to another paradigm, which takes as input sentences of the textual modality.
We propose to conduct Fine-grained feature-level ALignment between ID-based Models and Pretrained Language Models (FLIP) for CTR prediction.
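FLIP's actual objective is fine-grained at the feature level; as a simplified illustration of aligning the two modalities, the sketch below uses a symmetric InfoNCE-style loss between sample-level embeddings from the ID-based encoder and the PLM, which is an assumption rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def alignment_loss(id_emb: torch.Tensor, text_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pulling together the ID-based and textual
    representations of the same sample (batch of B samples, dim D)."""
    id_emb = F.normalize(id_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = id_emb @ text_emb.t() / temperature            # (B, B) similarities
    targets = torch.arange(id_emb.size(0), device=id_emb.device)  # diagonal pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```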
arXiv Detail & Related papers (2023-10-30T11:25:03Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
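A minimal sketch of the LLM-as-brain, recommenders-as-tools pattern follows; the planner's output format and the tool names here are assumptions for illustration, not InteRecAgent's actual interface.

```python
from typing import Callable, Dict, List

def interact(user_msg: str,
             llm_plan: Callable[[str], Dict],
             tools: Dict[str, Callable[[dict], List]]) -> str:
    """One turn of the loop: the LLM decides which recommender tool to
    call and with what arguments, then the results are phrased as a reply."""
    plan = llm_plan(user_msg)  # e.g. {"tool": "retrieve", "args": {...}}
    if plan.get("tool") in tools:
        items = tools[plan["tool"]](plan.get("args", {}))
        return f"Based on {plan['tool']}: " + ", ".join(map(str, items))
    return plan.get("reply", "")  # small talk handled by the LLM directly
```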
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- Continuous Prompt Tuning Based Textual Entailment Model for E-commerce Entity Typing [12.77583836715184]
The rapid pace of e-commerce has led to a constant emergence of new entities, which general entity-typing methods struggle to handle.
We propose a textual entailment model with continuous prompt-tuning-based hypotheses and fusion embeddings for e-commerce entity typing.
We show that our proposed model improves the average F1 score by around 2% over a baseline BERT entity-typing model.
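A sketch of the underlying entailment formulation: the sentence is the premise and "<entity> is a <type>" is the hypothesis, with the best-scoring type winning. Here `entailment_score` is a caller-supplied NLI model and the template is fixed, whereas the paper learns the hypothesis via continuous prompt tuning.

```python
from typing import Callable, List, Tuple

def type_by_entailment(sentence: str, entity: str,
                       candidate_types: List[str],
                       entailment_score: Callable[[str, str], float]
                       ) -> Tuple[str, float]:
    """Cast entity typing as textual entailment: score each candidate
    type's hypothesis against the sentence and return the best one."""
    scored = [(t, entailment_score(sentence, f"{entity} is a {t}."))
              for t in candidate_types]
    return max(scored, key=lambda pair: pair[1])
```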
arXiv Detail & Related papers (2022-11-04T14:20:40Z)