GCOF: Self-iterative Text Generation for Copywriting Using Large
Language Model
- URL: http://arxiv.org/abs/2402.13667v1
- Date: Wed, 21 Feb 2024 09:59:20 GMT
- Title: GCOF: Self-iterative Text Generation for Copywriting Using Large
Language Model
- Authors: Jianghui Zhou, Ya Gao, Jie Liu, Xuemin Zhao, Zhaohua Yang, Yue Wu,
Lirong Shi
- Abstract summary: Large language models such as ChatGPT have substantially simplified the generation of marketing copy.
We introduce the Genetic Copy Optimization Framework (GCOF), designed to enhance both the efficiency and engagement of marketing copy creation.
Online results indicate that copy produced by our framework achieves an average increase in click-through rate (CTR) of over $50\%$.
- Score: 6.439245424286433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) such as ChatGPT have substantially
simplified the generation of marketing copy, yet producing content that
satisfies domain-specific requirements, such as effectively engaging customers,
remains a significant challenge. In this work, we introduce the Genetic Copy
Optimization Framework (GCOF), designed to enhance both the efficiency and
engagement of marketing copy creation. We conduct explicit feature engineering
within the prompts of the LLM.
Additionally, we modify the crossover operator in Genetic Algorithm (GA),
integrating it into the GCOF to enable automatic feature engineering. This
integration facilitates a self-iterative refinement of the marketing copy.
Compared to human-curated copy, online results indicate that copy produced by
our framework achieves an average increase in click-through rate (CTR) of over
$50\%$.
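
The abstract outlines a genetic-algorithm loop that operates over engineered prompt features rather than over raw copy text. The sketch below illustrates one way such a self-iterative loop could look; it is not the authors' implementation, and `llm_generate`, `score_copy`, and `FEATURE_POOL` are hypothetical placeholders for an LLM call, a CTR-proxy scorer, and the engineered copy features.

```python
# Minimal sketch of a GCOF-style self-iterative loop (assumptions, not the paper's code).
import random

# Hypothetical pool of engineered copy features embedded into LLM prompts.
FEATURE_POOL = ["discount", "urgency", "social proof", "free shipping",
                "limited stock", "personalization", "guarantee", "new arrival"]

def llm_generate(features: list[str]) -> str:
    # Placeholder for an LLM call whose prompt embeds the selected features.
    return f"Marketing copy emphasizing: {', '.join(features)}"

def score_copy(copy: str) -> float:
    # Placeholder for an engagement estimate (e.g., a CTR-proxy model); random here.
    return random.random()

def crossover(parent_a: list[str], parent_b: list[str], k: int = 3) -> list[str]:
    # Crossover over feature sets: recombine features drawn from two high-scoring parents.
    pool = list(dict.fromkeys(parent_a + parent_b))  # de-duplicated union
    return random.sample(pool, min(k, len(pool)))

def mutate(features: list[str], rate: float = 0.3) -> list[str]:
    # Occasionally swap the last feature for a fresh one from the global pool.
    if random.random() < rate:
        candidates = [f for f in FEATURE_POOL if f not in features]
        if candidates:
            features = features[:-1] + [random.choice(candidates)]
    return features

def gcof_iterate(pop_size: int = 6, generations: int = 5) -> str:
    population = [random.sample(FEATURE_POOL, 3) for _ in range(pop_size)]
    best_copy, best_score = "", float("-inf")
    for _ in range(generations):
        # Rank feature sets by the estimated engagement of the copy they produce.
        ranked = sorted(population,
                        key=lambda f: score_copy(llm_generate(f)),
                        reverse=True)
        elite = ranked[: pop_size // 2]                # keep the top half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
        copy = llm_generate(population[0])
        score = score_copy(copy)
        if score > best_score:
            best_copy, best_score = copy, score
    return best_copy

if __name__ == "__main__":
    print(gcof_iterate())
```

In this reading, the GA searches the feature space while the LLM remains the copy generator, so each generation refines which engineered features the prompt emphasizes rather than editing the copy text directly.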
Related papers
- LLM-Ref: Enhancing Reference Handling in Technical Writing with Large Language Models [4.1180254968265055]
We present LLM-Ref, a writing assistant tool that aids researchers in writing articles from multiple source documents.
Unlike traditional RAG systems that use chunking and indexing, our tool retrieves and generates content directly from text paragraphs.
Our approach achieves a $3.25\times$ to $6.26\times$ increase in Ragas score, a comprehensive metric that provides a holistic view of a RAG system's ability to produce accurate, relevant, and contextually appropriate responses.
arXiv Detail & Related papers (2024-11-01T01:11:58Z) - Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z) - UniGen: A Unified Framework for Textual Dataset Generation Using Large Language Models [88.16197692794707]
UniGen is a comprehensive framework designed to produce diverse, accurate, and highly controllable datasets.
To augment data diversity, UniGen incorporates an attribute-guided generation module and a group checking feature.
Extensive experiments demonstrate the superior quality of data generated by UniGen.
arXiv Detail & Related papers (2024-06-27T07:56:44Z) - Adaptable Logical Control for Large Language Models [68.27725600175013]
Ctrl-G is an adaptable framework that facilitates tractable and flexible control of model generation at inference time.
We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of interactive text editing.
arXiv Detail & Related papers (2024-06-19T23:47:59Z) - RAG-Enhanced Commit Message Generation [8.858678357308726]
Writing commit messages manually is time-consuming, so Commit Message Generation has become a research hotspot.
This paper proposes REACT, a REtrieval-Augmented framework for CommiT message generation.
arXiv Detail & Related papers (2024-06-08T16:24:24Z) - FOCUS: Forging Originality through Contrastive Use in Self-Plagiarism for Language Models [38.76912842622624]
Pre-trained Language Models (PLMs) have shown impressive results in various Natural Language Generation (NLG) tasks.
This study introduces a unique "self-plagiarism" contrastive decoding strategy, aimed at boosting the originality of text produced by PLMs.
arXiv Detail & Related papers (2024-06-02T19:17:00Z) - Generating Attractive and Authentic Copywriting from Customer Reviews [7.159225692930055]
We propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products.
We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information.
Our framework outperforms all existing baseline and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5.
arXiv Detail & Related papers (2024-04-22T06:33:28Z) - A Simple but Effective Approach to Improve Structured Language Model
Output for Information Extraction [11.165093163378152]
Large language models (LLMs) have demonstrated impressive abilities in generating unstructured natural language according to instructions.
This paper introduces an efficient method, G&O, to enhance their structured text generation capabilities.
arXiv Detail & Related papers (2024-02-20T20:42:02Z) - Connecting Large Language Models with Evolutionary Algorithms Yields
Powerful Prompt Optimizers [70.18534453485849]
EvoPrompt is a framework for discrete prompt optimization.
It borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence.
It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z) - Stay on topic with Classifier-Free Guidance [57.28934343207042]
We show that CFG can be used broadly as an inference-time technique in pure language modeling.
We show that CFG improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of tasks.
arXiv Detail & Related papers (2023-06-30T17:07:02Z) - Tractable Control for Autoregressive Language Generation [82.79160918147852]
We propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models.
We show that GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation.
Our work opens up new avenues for controlling large language models and also motivates the development of more expressive TPMs.
arXiv Detail & Related papers (2023-04-15T00:19:44Z)