CREATER: CTR-driven Advertising Text Generation with Controlled
Pre-Training and Contrastive Fine-Tuning
- URL: http://arxiv.org/abs/2205.08943v1
- Date: Wed, 18 May 2022 14:17:04 GMT
- Title: CREATER: CTR-driven Advertising Text Generation with Controlled
Pre-Training and Contrastive Fine-Tuning
- Authors: Penghui Wei, Xuanhua Yang, Shaoguo Liu, Liang Wang, Bo Zheng
- Abstract summary: We propose CREATER, a CTR-driven advertising text generation approach, to generate ad texts based on high-quality user reviews.
To incorporate the CTR objective, our model learns from online A/B test data with contrastive learning, which encourages the model to generate ad texts that obtain higher CTR.
Experiments on industrial datasets show that CREATER significantly outperforms current approaches.
- Score: 14.912117221662054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on automatically generating the text of an ad, and the
goal is that the generated text can capture user interest for achieving higher
click-through rate (CTR). We propose CREATER, a CTR-driven advertising text
generation approach, to generate ad texts based on high-quality user reviews.
To incorporate the CTR objective, our model learns from online A/B test data with
contrastive learning, which encourages the model to generate ad texts that
obtain higher CTR. To alleviate the low-resource issue, we design a customized
self-supervised objective reducing the gap between pre-training and
fine-tuning. Experiments on industrial datasets show that CREATER significantly
outperforms current approaches. It has been deployed online in a leading
advertising platform and brings uplift on core online metrics.
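The contrastive fine-tuning idea above can be illustrated with a minimal sketch. Assume each online A/B test yields a pair of ad texts for the same product, one with higher observed CTR and one with lower, and that the generation model can score each text with a total log-likelihood. A margin-based pairwise loss (one common form of contrastive objective; the paper's exact formulation may differ) then pushes the model to assign higher likelihood to the winning text:

```python
def pairwise_contrastive_loss(logp_high_ctr, logp_low_ctr, margin=1.0):
    """Margin-based contrastive loss over a batch of A/B-tested ad pairs.

    logp_high_ctr / logp_low_ctr: per-pair model log-likelihoods of the
    ad text that won the A/B test (higher CTR) and the one that lost.
    The loss is zero once the winner's log-likelihood exceeds the
    loser's by at least `margin`; otherwise it grows linearly.
    """
    losses = [
        max(0.0, margin - (lp_hi - lp_lo))
        for lp_hi, lp_lo in zip(logp_high_ctr, logp_low_ctr)
    ]
    return sum(losses) / len(losses)
```

In training, these log-likelihoods would come from a sequence-to-sequence model scoring both texts conditioned on the same input reviews; this function names and its margin value are illustrative assumptions, not the paper's implementation.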
Related papers
- ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction [45.15127775876369]
Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications.
Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features.
We propose a novel model-agnostic framework (i.e., ClickPrompt) where we incorporate CTR models to generate interaction-aware soft prompts.
arXiv Detail & Related papers (2023-10-13T16:37:53Z) - Don't Add, don't Miss: Effective Content Preserving Generation from
Pre-Selected Text Spans [27.569687461395002]
Controlled Text Reduction (CTR) task isolates the text generation step within typical summarization-style tasks.
We introduce a high-quality, open-source CTR model that tackles two prior key limitations.
We substantially improve the silver training data quality via GPT-4 distillation.
arXiv Detail & Related papers (2023-10-13T11:28:02Z) - Boosting Punctuation Restoration with Data Generation and Reinforcement
Learning [70.26450819702728]
Punctuation restoration is an important task in automatic speech recognition (ASR).
The discrepancy between written punctuated texts and ASR texts limits the usability of written texts in training punctuation restoration systems for ASR texts.
This paper proposes a reinforcement learning method to exploit in-topic written texts and recent advances in large pre-trained generative language models to bridge this gap.
arXiv Detail & Related papers (2023-07-24T17:22:04Z) - DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for
CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
arXiv Detail & Related papers (2023-05-03T12:34:45Z) - X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic
Textual Guidance [70.08635216710967]
X-Mesh is a text-driven 3D stylization framework that incorporates a novel Text-guided Dynamic Attention Module.
We introduce a new standard text-mesh benchmark, MIT-30, and two automated metrics, which will enable future research to achieve fair and objective comparisons.
arXiv Detail & Related papers (2023-03-28T06:45:31Z) - On-Device Model Fine-Tuning with Label Correction in Recommender Systems [43.41875046295657]
This work focuses on the fundamental click-through rate (CTR) prediction task in recommender systems.
We propose a novel label correction method, which requires each user only to change the labels of the local samples ahead of on-device fine-tuning.
arXiv Detail & Related papers (2022-10-21T14:40:18Z) - Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms recent PPLM, GeDi, and DExperts on PPL and sentiment accuracy based on the external classifier of generated texts.
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z) - TSI: an Ad Text Strength Indicator using Text-to-CTR and
Semantic-Ad-Similarity [16.10904771281746]
We propose an ad text strength indicator (TSI) which: (i) predicts the click-through rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and (iii) compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
As suggestions for ad text improvement, TSI shows anonymized versions of superior ads (higher predicted CTR) in the neighborhood.
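The neighborhood comparison step can be sketched as follows. Assume a CTR predictor has already scored the input ad and its retrieved neighbors; a simple quantile rule over the neighborhood then yields the strong/weak verdict. The function name, the quantile threshold, and the strict-inequality choice are illustrative assumptions, not the paper's actual decision rule:

```python
def ad_strength(input_ctr, neighbor_ctrs, quantile=0.5):
    """Label an input ad 'strong' or 'weak' against its neighborhood.

    input_ctr: predicted CTR of the input ad text.
    neighbor_ctrs: predicted CTRs of similar existing ads.
    The ad is 'strong' if its predicted CTR exceeds the CTR at the
    given quantile of the neighborhood, 'weak' otherwise.
    """
    ranked = sorted(neighbor_ctrs)
    idx = int(quantile * (len(ranked) - 1))  # index of the quantile element
    threshold = ranked[idx]
    return "strong" if input_ctr > threshold else "weak"
```

A weak verdict would then trigger the improvement suggestions described above, surfacing anonymized higher-CTR neighbors.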
arXiv Detail & Related papers (2021-08-18T16:24:40Z) - Learning to Create Better Ads: Generation and Ranking Approaches for Ad
Creative Refinement [26.70647666598025]
We study approaches to refine the given ad text and image by: (i) generating new ad text, (ii) recommending keyphrases for new ad text, and (iii) recommending image tags (objects in the image).
Based on A/B tests conducted by multiple advertisers, we form pairwise examples of inferior and superior ad creatives.
We also share broadly applicable insights from our experiments using data from the Yahoo Gemini ad platform.
arXiv Detail & Related papers (2020-08-17T16:46:28Z) - AliExpress Learning-To-Rank: Maximizing Online Model Performance without
Going Online [60.887637616379926]
This paper proposes an evaluator-generator framework for learning-to-rank.
It consists of an evaluator that generalizes to evaluate recommendations involving the context, and a generator that maximizes the evaluator score by reinforcement learning.
Our method achieves a significant improvement in terms of Conversion Rate (CR) over the industrial-level fine-tuned model in online A/B tests.
arXiv Detail & Related papers (2020-03-25T10:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.