LLM-driven Constrained Copy Generation through Iterative Refinement
- URL: http://arxiv.org/abs/2504.10391v1
- Date: Mon, 14 Apr 2025 16:38:28 GMT
- Title: LLM-driven Constrained Copy Generation through Iterative Refinement
- Authors: Varun Vasudevan, Faezeh Akhavizadegan, Abhinav Prakash, Yokila Arora, Jason Cho, Tanya Mendiratta, Sushant Kumar, Kannan Achan
- Abstract summary: We propose an end-to-end framework for scalable copy generation using iterative refinement. Examples of these constraints include length, topics, keywords, preferred lexical ordering, and tone of voice. Our results show that iterative refinement increases the copy success rate by 16.25-35.91% across use cases.
- Score: 8.297656135501395
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Crafting a marketing message (copy), or copywriting, is a challenging generation task because the copy must adhere to various constraints. For humans, copy creation is inherently iterative: an initial draft is followed by successive refinements. However, manual copy creation is time-consuming and expensive, yielding only a few copies per use case and limiting our ability to personalize content for customers. In contrast, LLMs can generate copies quickly, but the generated content does not consistently satisfy all the constraints on the first attempt (much like humans). While recent studies have shown promise in improving constrained generation through iterative refinement, they have primarily addressed tasks with only a few simple constraints; the effectiveness of iterative refinement for tasks such as copy generation, which involves many intricate constraints, therefore remains unclear. To address this gap, we propose an LLM-based end-to-end framework for scalable copy generation using iterative refinement. To the best of our knowledge, this is the first study to address multiple challenging constraints simultaneously in copy generation. Examples of these constraints include length, topics, keywords, preferred lexical ordering, and tone of voice. We demonstrate the performance of our framework by creating copies for e-commerce banners for three use cases of varying complexity. Our results show that iterative refinement increases the copy success rate by 16.25-35.91% across use cases. Furthermore, the copies generated with our approach outperformed manually created content in multiple pilot studies using a multi-armed bandit framework; the winning copy improved the click-through rate by 38.5-45.21%.
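To make the loop concrete, here is a minimal sketch of constraint-driven iterative refinement as described in the abstract; the llm callable, the prompts, and the two example checks (length and keywords) are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of constraint-driven iterative refinement for copy
# generation. The llm callable, the prompts, and the two example checks
# (length and keywords) are illustrative stand-ins, not the paper's method.

def check_constraints(copy: str, max_words: int, keywords: list[str]) -> list[str]:
    """Return a human-readable description of every violated constraint."""
    violations = []
    if len(copy.split()) > max_words:
        violations.append(f"copy exceeds {max_words} words")
    for keyword in keywords:
        if keyword.lower() not in copy.lower():
            violations.append(f"missing required keyword: {keyword!r}")
    return violations

def generate_copy(llm, brief: str, max_words: int, keywords: list[str],
                  max_rounds: int = 5) -> str:
    """Draft once, then iteratively ask the LLM to repair violated constraints."""
    copy = llm(f"Write a short e-commerce banner copy for: {brief}")
    for _ in range(max_rounds):
        violations = check_constraints(copy, max_words, keywords)
        if not violations:
            break  # all constraints satisfied; stop refining
        feedback = "; ".join(violations)
        copy = llm(f"Revise the copy to fix these issues: {feedback}\n\nCopy: {copy}")
    return copy
```

A full system would also check topics, lexical ordering, and tone of voice, some of which may themselves require an LLM-based critic rather than a simple rule.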
Related papers
- Language Models can Self-Lengthen to Generate Long Texts [74.96074422345806]
This paper introduces an innovative iterative training framework called Self-Lengthen.
It leverages only the intrinsic knowledge and skills of Large Language Models without the need for auxiliary data or proprietary models.
Experiments on benchmarks and human evaluations show that Self-Lengthen outperforms existing methods in long-text generation.
arXiv Detail & Related papers (2024-10-31T13:47:10Z)
- Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models [79.70436109672599]
We derive non-vacuous generalization bounds for large language models as large as LLaMA2-70B.
Our work achieves the first non-vacuous bounds for models that are deployed in practice and generate high-quality text.
arXiv Detail & Related papers (2024-07-25T16:13:58Z)
- CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation [132.00910067533982]
We introduce CopyBench, a benchmark designed to measure both literal and non-literal copying in LM generations.
We find that, although literal copying is relatively rare, two types of non-literal copying -- event copying and character copying -- occur even in models as small as 7B parameters.
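As a rough sketch of what the literal-copying side of such a benchmark can check (an illustrative heuristic only, not CopyBench's actual metric), one can flag generations that share a long substring with protected source text:

```python
# Illustrative literal-copy heuristic: flag a generation if it shares a
# long character substring with the source text. CopyBench's real metrics
# are more involved; this only sketches the idea.

def longest_shared_substring(generation: str, source: str) -> int:
    """Length of the longest substring shared by the two texts (O(n*m) DP)."""
    n, m = len(generation), len(source)
    prev = [0] * (m + 1)
    best = 0
    for i in range(1, n + 1):
        cur = [0] * (m + 1)
        for j in range(1, m + 1):
            if generation[i - 1] == source[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def is_literal_copy(generation: str, source: str, threshold: int = 50) -> bool:
    """Heuristic: treat a 50+ character shared substring as literal copying."""
    return longest_shared_substring(generation, source) >= threshold
```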
arXiv Detail & Related papers (2024-07-09T17:58:18Z)
- Generating Attractive and Authentic Copywriting from Customer Reviews [7.159225692930055]
We propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products.
We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information.
Our framework outperforms all existing baselines, including zero-shot large language models such as LLaMA-2-chat-7B and GPT-3.5.
arXiv Detail & Related papers (2024-04-22T06:33:28Z)
- Copy Is All You Need [66.00852205068327]
We formulate text generation as progressively copying text segments from an existing text collection.
Our approach achieves better generation quality according to both automatic and human evaluations.
Our approach attains additional performance gains by simply scaling up to larger text collections.
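The copying formulation can be pictured with a toy retrieval step; the exact-match lookup below is a stand-in for the paper's learned phrase retrieval, not its actual method:

```python
# Toy generation-by-copying step: locate the last k output tokens in a
# tokenized text collection and copy the span that follows the match.
# The paper retrieves segments with learned phrase representations;
# this exact-match lookup only illustrates the idea.

def copy_next_segment(output: list[str], corpus: list[str],
                      k: int = 2, span: int = 4) -> list[str]:
    """Return up to `span` tokens that follow a match of output[-k:] in corpus."""
    if len(output) < k:
        return []
    query = output[-k:]
    for i in range(len(corpus) - k):
        if corpus[i:i + k] == query:
            return corpus[i + k:i + k + span]
    return []  # no match: a real system would back off to token-level generation

# Example: if the output ends in ["free", "shipping"] and the corpus contains
# "free shipping on all orders today", the step copies ["on", "all", "orders", "today"].
```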
arXiv Detail & Related papers (2023-07-13T05:03:26Z)
- Tractable Control for Autoregressive Language Generation [82.79160918147852]
We propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models.
We show that the resulting approach, GeLaTo, achieves state-of-the-art performance on challenging benchmarks for constrained text generation.
Our work opens up new avenues for controlling large language models and also motivates the development of more expressive TPMs.
arXiv Detail & Related papers (2023-04-15T00:19:44Z)
- AutoTemplate: A Simple Recipe for Lexically Constrained Text Generation [2.7763177595791655]
We introduce AutoTemplate, a simple yet effective lexically constrained text generation framework.
We conduct experiments on two tasks: keywords-to-sentence generations and entity-guided summarization.
Experimental results show that AutoTemplate outperforms competitive baselines on both tasks.
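The recipe can be pictured as a two-step sketch; the numbered-slot syntax and the realize helper below are assumptions made for illustration, not AutoTemplate's exact format:

```python
# Illustrative two-step recipe for keywords-to-sentence generation: a model
# first emits a template with numbered slots, then the required keywords are
# spliced into those slots. The "<0>", "<1>", ... slot syntax is an
# assumption for this sketch, not AutoTemplate's actual format.

def realize(template: str, keywords: list[str]) -> str:
    """Replace slots <0>, <1>, ... in the template with the keywords, in order."""
    for i, keyword in enumerate(keywords):
        template = template.replace(f"<{i}>", keyword)
    return template

# Example with a hand-written template standing in for model output:
# realize("Enjoy <0> on every <1> this weekend.", ["free shipping", "order"])
# -> "Enjoy free shipping on every order this weekend."
```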
arXiv Detail & Related papers (2022-11-15T18:36:18Z)
- May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation [1.2453219864236247]
We propose a novel supervised copy-network approach that helps the model decide which words need to be copied and which need to be generated.
Specifically, we re-define the objective function, which leverages source sequences and target vocabularies as guidance for copying.
The experimental results on data-to-text generation and abstractive summarization tasks verify that our approach enhances the copying quality and improves the degree of abstractness.
arXiv Detail & Related papers (2021-12-20T06:54:28Z)
- CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models [31.832217465573503]
We present a model with an explicit token-level copy operation and extend it to copying entire spans.
Our model provides hard alignments between spans in the input and output, allowing for nontraditional applications of seq2seq, like information extraction.
arXiv Detail & Related papers (2020-10-28T22:45:16Z)
- Copy that! Editing Sequences by Copying Spans [40.23377412674599]
We present an extension of seq2seq models capable of copying entire spans of the input to the output in one step.
In experiments on a range of editing tasks of natural language and source code, we show that our new model consistently outperforms simpler baselines.
arXiv Detail & Related papers (2020-06-08T17:42:18Z)
- Token Manipulation Generative Adversarial Network for Text Generation [0.0]
We decompose the conditional text generation problem into two tasks, make-a-blank and fill-in-the-blank, and extend the former to handle more complex manipulations of the given tokens.
We show that the proposed model not only addresses the limitations of prior approaches but also delivers good results without compromising quality or diversity.
arXiv Detail & Related papers (2020-05-06T13:10:43Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
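That process can be pictured with a toy insertion loop; the insert_between predictor below is a hypothetical stand-in for POINTER's learned insertion model:

```python
# Toy coarse-to-fine insertion loop: start from a few constraint tokens and,
# in each round, optionally insert one new token into every gap between
# existing tokens. insert_between(tokens, i) is a hypothetical stand-in for
# POINTER's learned model; it returns a token for the gap after position i,
# or None to leave that gap empty.

def progressive_generate(tokens, insert_between, max_rounds=8):
    """Repeatedly insert tokens between existing ones until nothing is added."""
    for _ in range(max_rounds):
        refined, inserted = [], False
        for i, token in enumerate(tokens):
            refined.append(token)
            new_token = insert_between(tokens, i)  # proposal for the gap after i
            if new_token is not None:
                refined.append(new_token)
                inserted = True
        if not inserted:
            break  # the model proposes no further insertions anywhere
        tokens = refined
    return tokens

# Example progression (one insertion per round):
# ["sale", "shoes"] -> ["sale", "on", "shoes"] -> ["sale", "on", "running", "shoes"]
```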
arXiv Detail & Related papers (2020-05-01T18:11:54Z)