CREATER: CTR-driven Advertising Text Generation with Controlled
Pre-Training and Contrastive Fine-Tuning
- URL: http://arxiv.org/abs/2205.08943v1
- Date: Wed, 18 May 2022 14:17:04 GMT
- Authors: Penghui Wei, Xuanhua Yang, Shaoguo Liu, Liang Wang, Bo Zheng
- Abstract summary: We propose CREATER, a CTR-driven advertising text generation approach that generates ad texts based on high-quality user reviews.
To incorporate the CTR objective, our model learns from online A/B test data with contrastive learning, which encourages the model to generate ad texts that obtain a higher CTR.
Experiments on industrial datasets show that CREATER significantly outperforms current approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on automatically generating the text of an ad, with the
goal that the generated text captures user interest and achieves a higher
click-through rate (CTR). We propose CREATER, a CTR-driven advertising text
generation approach that generates ad texts based on high-quality user reviews.
To incorporate the CTR objective, our model learns from online A/B test data
with contrastive learning, which encourages the model to generate ad texts that
obtain a higher CTR. To alleviate the low-resource issue, we design a customized
self-supervised objective that reduces the gap between pre-training and
fine-tuning. Experiments on industrial datasets show that CREATER significantly
outperforms current approaches. It has been deployed online on a leading
advertising platform and brings uplift in core online metrics.
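To make the contrastive fine-tuning step concrete, here is a minimal sketch (not the authors' released code) in which the generator is pushed to assign a higher length-normalized likelihood to the ad text that won an A/B test than to the losing variant for the same product. It assumes a Hugging Face-style encoder-decoder interface; padding handling and the paper's exact loss form are omitted.

```python
# Hedged sketch of contrastive fine-tuning on A/B-test pairs. `model` is
# assumed to be a Hugging Face-style seq2seq model; the real CREATER loss
# may differ (this is a margin-ranking instantiation of the idea).
import torch.nn.functional as F

def seq_log_prob(model, src_ids, tgt_ids):
    """Length-normalized log-likelihood of tgt_ids given src_ids."""
    logits = model(input_ids=src_ids, labels=tgt_ids).logits      # (B, T, V)
    token_lp = F.log_softmax(logits, dim=-1).gather(
        -1, tgt_ids.unsqueeze(-1)).squeeze(-1)                    # (B, T)
    return token_lp.mean(dim=-1)                                  # (B,)

def contrastive_ctr_loss(model, src_ids, winner_ids, loser_ids, margin=1.0):
    """Push the A/B winner's likelihood above the loser's by a margin."""
    gap = (seq_log_prob(model, src_ids, winner_ids)
           - seq_log_prob(model, src_ids, loser_ids))
    return F.relu(margin - gap).mean()
```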
Related papers
- SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation [55.61004653386632]
Large Language Models (LLMs) often produce hallucinations, i.e., information that is unfaithful or not grounded in the input context.
This paper introduces a novel self-supervised method for generating a training set of unfaithful samples.
We then refine the model using a training process that encourages the generation of grounded outputs over unfaithful ones.
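One plausible (hypothetical) way to manufacture such unfaithful negatives is to generate from a deliberately corrupted context and pair the result with the reference as (preferred, rejected); `generate` below is a placeholder for any text-generation callable, not SCOPE's actual pipeline.

```python
# Sketch: corrupt the grounding context so generation is likely unfaithful,
# then form a preference pair for grounded-over-ungrounded training.
import random

def corrupt_context(sentences, drop_prob=0.5, seed=0):
    """Randomly drop and shuffle context sentences to invite hallucination."""
    rng = random.Random(seed)
    kept = [s for s in sentences if rng.random() > drop_prob]
    rng.shuffle(kept)
    return " ".join(kept) if kept else sentences[0]

def build_preference_pair(generate, context_sentences, reference):
    negative = generate(corrupt_context(context_sentences))  # likely ungrounded
    return {"context": " ".join(context_sentences),
            "preferred": reference, "rejected": negative}
```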
arXiv Detail & Related papers (2025-02-19T12:31:58Z)
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective.
To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL).
Our method achieves state-of-the-art performance in both online and offline metrics.
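A minimal REINFORCE-style sketch of reward-model-driven fine-tuning follows; `policy.sample` and the reward model's call signature are placeholders, and the paper's actual RL algorithm and reward design may differ.

```python
# Hypothetical policy-gradient step with a learned CTR reward model.
import torch

def reinforce_step(policy, reward_model, prompts, optimizer, baseline=0.0):
    samples, log_probs = policy.sample(prompts)        # placeholder API, (B,)
    with torch.no_grad():
        rewards = reward_model(prompts, samples)       # predicted-CTR reward
    loss = -((rewards - baseline) * log_probs).mean()  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), rewards.mean().item()
```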
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
- Efficient Transfer Learning Framework for Cross-Domain Click-Through Rate Prediction [47.7066461216227]
The Efficient Transfer Learning Framework for Cross-Domain Click-Through Rate Prediction (E-CDCTR) has three key components: a Tiny Pre-training Model (TPM), a Complete Pre-training Model (CPM), and an Advertisement CTR model (A-CTR).
TPM provides richer representations of users and items for both the CPM and the A-CTR, effectively alleviating the problems inherent in daily updates.
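One rough reading of the transfer step, sketched below, is to warm-start the ad CTR model's user/item embedding tables from a pre-trained model instead of relearning them at every daily update; the attribute names are hypothetical, not from the paper.

```python
# Hypothetical warm-start of embedding tables from a pre-trained model.
import torch.nn as nn

def warm_start(a_ctr: nn.Module, pretrained: nn.Module,
               tables=("user_embedding", "item_embedding")):  # assumed names
    for name in tables:
        getattr(a_ctr, name).weight.data.copy_(
            getattr(pretrained, name).weight.data)
```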
arXiv Detail & Related papers (2024-08-29T03:34:39Z)
- CELA: Cost-Efficient Language Model Alignment for CTR Prediction [70.65910069412944]
Click-Through Rate (CTR) prediction holds a paramount position in recommender systems.
Recent efforts have sought to mitigate these challenges by integrating Pre-trained Language Models (PLMs).
We propose Cost-Efficient Language Model Alignment (CELA) for CTR prediction.
arXiv Detail & Related papers (2024-05-17T07:43:25Z)
- ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction [45.15127775876369]
Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications.
Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features.
We propose a novel model-agnostic framework (i.e., ClickPrompt) where we incorporate CTR models to generate interaction-aware soft prompts.
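A minimal sketch of the soft-prompt idea: project the CTR model's interaction state into a few prompt vectors and prepend them to the language model's token embeddings. Dimensions and module names below are assumptions, not ClickPrompt's published architecture.

```python
# Sketch: bridge a CTR model's hidden state into PLM soft prompts.
import torch
import torch.nn as nn

class SoftPromptBridge(nn.Module):
    def __init__(self, ctr_hidden=64, lm_hidden=768, n_prompts=4):
        super().__init__()
        self.proj = nn.Linear(ctr_hidden, n_prompts * lm_hidden)
        self.n_prompts, self.lm_hidden = n_prompts, lm_hidden

    def forward(self, ctr_state, token_embeds):
        # ctr_state: (B, ctr_hidden); token_embeds: (B, T, lm_hidden)
        prompts = self.proj(ctr_state).view(-1, self.n_prompts, self.lm_hidden)
        return torch.cat([prompts, token_embeds], dim=1)  # prepend soft prompts
```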
arXiv Detail & Related papers (2023-10-13T16:37:53Z)
- Don't Add, don't Miss: Effective Content Preserving Generation from Pre-Selected Text Spans [27.569687461395002]
The Controlled Text Reduction (CTR) task isolates the text generation step within typical summarization-style tasks.
We introduce a high-quality, open-source CTR model that tackles two prior key limitations.
We substantially improve the silver training data quality via GPT-4 distillation.
arXiv Detail & Related papers (2023-10-13T11:28:02Z)
- DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction [61.68415731896613]
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation.
We propose a model that enables Dynamic Embedding Learning with Truncated Conscious Attention for CTR prediction.
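One possible reading of "truncated conscious attention", sketched below, is attention over feature-field embeddings restricted to each instance's top-k most salient fields; this is a guess at the mechanism, not the paper's definition.

```python
# Sketch: attend over field embeddings, keeping only the top-k fields.
import torch

def truncated_attention(field_embeds, query, k=4):
    # field_embeds: (B, F, D); query: (B, D)
    scores = torch.einsum("bfd,bd->bf", field_embeds, query)   # (B, F)
    topk = scores.topk(k, dim=1).indices
    mask = torch.full_like(scores, float("-inf")).scatter(1, topk, 0.0)
    weights = torch.softmax(scores + mask, dim=1)  # zero weight outside top-k
    return torch.einsum("bf,bfd->bd", weights, field_embeds)   # (B, D)
```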
arXiv Detail & Related papers (2023-05-03T12:34:45Z)
- TSI: an Ad Text Strength Indicator using Text-to-CTR and Semantic-Ad-Similarity [16.10904771281746]
We propose an ad text strength indicator (TSI) which: (i) predicts the click-through rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and (iii) compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
As suggestions for ad text improvement, TSI shows anonymized versions of superior ads (higher predicted CTR) in the neighborhood.
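The decision rule lends itself to a compact sketch: predict the input ad's CTR, retrieve its nearest existing ads by embedding similarity, and call the ad strong if it beats most of its neighborhood. The 0.5 threshold and cosine similarity are assumptions, not the paper's exact settings.

```python
# Sketch of a TSI-style strong/weak call via a CTR-model neighborhood.
import numpy as np

def ad_strength(pred_ctr, embed, ad_embeds, ad_ctrs, k=10):
    # Cosine similarity of the input ad to all existing ads.
    sims = ad_embeds @ embed / (
        np.linalg.norm(ad_embeds, axis=1) * np.linalg.norm(embed) + 1e-8)
    neighbors = np.argsort(-sims)[:k]                # k most similar ads
    frac_beaten = float(np.mean(pred_ctr > ad_ctrs[neighbors]))
    return ("strong" if frac_beaten >= 0.5 else "weak"), neighbors
```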
arXiv Detail & Related papers (2021-08-18T16:24:40Z)
- Learning to Create Better Ads: Generation and Ranking Approaches for Ad Creative Refinement [26.70647666598025]
We study approaches to refine the given ad text and image by: (i) generating new ad text, (ii) recommending keyphrases for new ad text, and (iii) recommending image tags (objects in the image).
Based on A/B tests conducted by multiple advertisers, we form pairwise examples of inferior and superior ad creatives.
We also share broadly applicable insights from our experiments using data from the Yahoo Gemini ad platform.
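Such pairwise data naturally supports a BPR-style ranking objective, sketched below with a placeholder `scorer` module; the paper's actual ranking model may differ.

```python
# Sketch: rank the A/B-superior creative above the inferior one.
import torch.nn.functional as F

def pairwise_ranking_loss(scorer, superior_feats, inferior_feats):
    # scorer maps creative features to a scalar quality score, shape (B, 1).
    return -F.logsigmoid(scorer(superior_feats) - scorer(inferior_feats)).mean()
```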
arXiv Detail & Related papers (2020-08-17T16:46:28Z)
- AliExpress Learning-To-Rank: Maximizing Online Model Performance without Going Online [60.887637616379926]
This paper proposes an evaluator-generator framework for learning-to-rank.
It consists of an evaluator that generalizes to evaluate recommendations involving the context, and a generator that maximizes the evaluator score by reinforcement learning.
Our method achieves a significant improvement in terms of Conversion Rate (CR) over the industrial-level fine-tuned model in online A/B tests.
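The paper trains the generator with RL; as a simpler stand-in for illustration (a deliberate swap, not the paper's method), the same evaluator can pick among candidate rankings offline:

```python
# Best-of-n selection by evaluator score; the evaluator signature is assumed.
def select_ranking(evaluator, context, candidate_rankings):
    return max(candidate_rankings, key=lambda r: evaluator(context, r))
```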
arXiv Detail & Related papers (2020-03-25T10:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.