Learning to Create Better Ads: Generation and Ranking Approaches for Ad
Creative Refinement
- URL: http://arxiv.org/abs/2008.07467v2
- Date: Wed, 2 Dec 2020 00:16:11 GMT
- Title: Learning to Create Better Ads: Generation and Ranking Approaches for Ad
Creative Refinement
- Authors: Shaunak Mishra, Manisha Verma, Yichao Zhou, Kapil Thadani, Wei Wang
- Abstract summary: We study approaches to refine the given ad text and image by: (i) generating new ad text, (ii) recommending keyphrases for new ad text, and (iii) recommending image tags (objects in image) to select a new ad image.
Based on A/B tests conducted by multiple advertisers, we form pairwise examples of inferior and superior ad creatives.
We also share broadly applicable insights from our experiments using data from the Yahoo Gemini ad platform.
- Score: 26.70647666598025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the online advertising industry, the process of designing an ad creative
(i.e., ad text and image) requires manual labor. Typically, each advertiser
launches multiple creatives via online A/B tests to infer effective creatives
for the target audience, which are then refined further in an iterative fashion.
Due to the manual nature of this process, it is time-consuming to learn,
refine, and deploy the modified creatives. Since major ad platforms typically
run A/B tests for multiple advertisers in parallel, we explore the possibility
of collaboratively learning ad creative refinement via A/B tests of multiple
advertisers. In particular, given an input ad creative, we study approaches to
refine the given ad text and image by: (i) generating new ad text, (ii)
recommending keyphrases for new ad text, and (iii) recommending image tags
(objects in image) to select new ad image. Based on A/B tests conducted by
multiple advertisers, we form pairwise examples of inferior and superior ad
creatives, and use such pairs to train models for the above tasks. For
generating new ad text, we demonstrate the efficacy of an encoder-decoder
architecture with copy mechanism, which allows some words from the (inferior)
input text to be copied to the output while incorporating new words associated
with higher click-through-rate. For the keyphrase and image tag recommendation
task, we demonstrate the efficacy of a deep relevance matching model, as well
as the relative robustness of ranking approaches compared to ad text generation
in cold-start scenarios with unseen advertisers. We also share broadly
applicable insights from our experiments using data from the Yahoo Gemini ad
platform.
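For the ad text generation task described above, the copy mechanism is in the spirit of pointer-generator networks. Below is a minimal, illustrative PyTorch sketch of a single decoding step that mixes generating a new word with copying a word from the (inferior) input ad text; the class name, layer choices, and dimensions are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyDecoderStep(nn.Module):
    """One decoding step of an attention decoder with a copy mechanism
    (pointer-generator style, assumed here): the output distribution mixes
    generating a word from a fixed vocabulary with copying a word from the
    input ad text."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.attn = nn.Linear(2 * hidden_dim, 1)          # attention energy
        self.out = nn.Linear(2 * hidden_dim, vocab_size)  # generation logits
        self.p_gen = nn.Linear(2 * hidden_dim, 1)         # generate-vs-copy gate

    def forward(self, dec_state, enc_states, src_ids, vocab_size_ext):
        # dec_state: (batch, hidden)            current decoder hidden state
        # enc_states: (batch, src_len, hidden)  encoder states of the input ad text
        # src_ids: (batch, src_len)             source token ids in the extended vocab
        batch, src_len, hidden = enc_states.shape

        # Attention over the (inferior) input ad text.
        expanded = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        energy = self.attn(torch.cat([expanded, enc_states], dim=-1)).squeeze(-1)
        attn = F.softmax(energy, dim=-1)                        # (batch, src_len)
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)

        features = torch.cat([dec_state, context], dim=-1)
        gen_dist = F.softmax(self.out(features), dim=-1)        # vocabulary words
        p_gen = torch.sigmoid(self.p_gen(features))             # (batch, 1)

        # Mix: generate new words (e.g., phrases associated with higher CTR)
        # or copy words from the source ad text.
        mixed = torch.zeros(batch, vocab_size_ext, device=dec_state.device)
        mixed[:, :gen_dist.size(1)] = p_gen * gen_dist
        mixed.scatter_add_(1, src_ids, (1.0 - p_gen) * attn)    # copy probabilities
        return mixed
```

The keyphrase and image tag recommendation tasks are trained on (inferior, superior) creative pairs derived from A/B tests. As a hedged sketch only, the toy scorer below stands in for the paper's deep relevance matching model and is trained with a pairwise hinge loss so that candidates from the superior creative score higher than those from the inferior one; every name and hyperparameter here is invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseRelevanceScorer(nn.Module):
    """Toy relevance scorer: embeds the ad text and a candidate keyphrase or
    image tag, then scores their match. A stand-in for the deep relevance
    matching model, not the paper's architecture."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, ad_ids, cand_ids):
        # ad_ids, cand_ids: (batch, seq_len) padded token-id tensors
        return self.score(self.emb(ad_ids), self.emb(cand_ids)).squeeze(-1)


def pairwise_hinge_loss(model, ad_ids, superior_ids, inferior_ids, margin=1.0):
    """A/B tests yield (inferior, superior) creative pairs; candidates from the
    superior creative should score higher than those from the inferior one."""
    pos = model(ad_ids, superior_ids)
    neg = model(ad_ids, inferior_ids)
    return F.relu(margin - (pos - neg)).mean()


# Illustrative usage with random ids (real inputs would be tokenized ad text
# and candidate keyphrases/tags).
model = PairwiseRelevanceScorer(vocab_size=10000)
ad = torch.randint(0, 10000, (8, 20))
sup = torch.randint(0, 10000, (8, 5))
inf = torch.randint(0, 10000, (8, 5))
loss = pairwise_hinge_loss(model, ad, sup, inf)
loss.backward()
```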
Related papers
- CTR-Driven Advertising Image Generation with Multimodal Large Language Models [53.40005544344148]
We explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images by optimizing for Click-Through Rate (CTR) as the primary objective.
To further improve the CTR of generated images, we propose a novel reward model to fine-tune pre-trained MLLMs through Reinforcement Learning (RL)
Our method achieves state-of-the-art performance in both online and offline metrics.
arXiv Detail & Related papers (2025-02-05T09:06:02Z)
- DistinctAD: Distinctive Audio Description Generation in Contexts [62.58375366359421]
We propose DistinctAD, a framework for generating Audio Descriptions that emphasize distinctiveness to produce better narratives.
To address the domain gap, we introduce a CLIP-AD adaptation strategy that does not require additional AD corpora.
In Stage-II, DistinctAD incorporates two key innovations: (i) a Contextual Expectation-Maximization Attention (EMA) module that reduces redundancy by extracting common bases from consecutive video clips, and (ii) an explicit distinctive word prediction loss that filters out repeated words in the context.
arXiv Detail & Related papers (2024-11-27T09:54:59Z)
- Empowering Visual Creativity: A Vision-Language Assistant to Image Editing Recommendations [109.65267337037842]
We introduce the task of Image Editing Recommendation (IER)
IER aims to automatically generate diverse creative editing instructions from an input image and a simple prompt representing the users' under-specified editing purpose.
We introduce Creativity-Vision Language Assistant (Creativity-VLA), a multimodal framework designed specifically for edit-instruction generation.
arXiv Detail & Related papers (2024-05-31T18:22:29Z)
- Parallel Ranking of Ads and Creatives in Real-Time Advertising Systems [20.78133992969317]
We propose, for the first time, a novel architecture for parallel online ranking of ads and creatives.
The online architecture enables sophisticated personalized creative modeling while reducing overall latency.
The offline joint model for CTR estimation allows mutual awareness and collaborative optimization between ads and creatives.
arXiv Detail & Related papers (2023-12-20T04:05:21Z)
- Long-Term Ad Memorability: Understanding & Generating Memorable Ads [54.23854539909078]
Despite the importance of long-term memory in marketing and brand building, until now, there has been no large-scale study on the memorability of ads.
We release the first memorability dataset, LAMBDA, consisting of 1749 participants and 2205 ads covering 276 brands.
Running statistical tests over different participant subpopulations and ad types, we find many interesting insights into what makes an ad memorable, e.g., fast-moving ads are more memorable than those with slower scenes.
We present a scalable method to build a high-quality memorable ad generation model by leveraging automatically annotated data.
arXiv Detail & Related papers (2023-09-01T10:27:04Z)
- Boost CTR Prediction for New Advertisements via Modeling Visual Content [55.11267821243347]
We exploit the visual content in ads to boost the performance of CTR prediction models.
We learn the embedding for each visual ID based on the historical user-ad interactions accumulated in the past.
After incorporating the visual ID embedding in the CTR prediction model of Baidu online advertising, the average CTR of ads improves by 1.46%, and the total charge increases by 1.10%.
arXiv Detail & Related papers (2022-09-23T17:08:54Z)
- Persuasion Strategies in Advertisements [68.70313043201882]
We introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies.
We then formulate the task of persuasion strategy prediction with multi-modal learning.
We conduct a real-world case study on 1600 advertising campaigns of 30 Fortune-500 companies.
arXiv Detail & Related papers (2022-08-20T07:33:13Z)
- Aspect-based Analysis of Advertising Appeals for Search Engine Advertising [37.85305426549587]
We focus on exploring the effective A^3 for different industries with the aim of assisting the ad creation process.
Our experiments demonstrated that different industries have their own effective A^3 and that the identification of the A^3 contributes to the estimation of advertising performance.
arXiv Detail & Related papers (2022-04-25T05:31:07Z)
- TSI: an Ad Text Strength Indicator using Text-to-CTR and Semantic-Ad-Similarity [16.10904771281746]
We propose an ad text strength indicator (TSI) which: (i) predicts the click-through-rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
As suggestions for ad text improvement, TSI shows anonymized versions of superior ads (higher predicted CTR) in the neighborhood.
arXiv Detail & Related papers (2021-08-18T16:24:40Z)
- Efficient Optimal Selection for Composited Advertising Creatives with Tree Structure [24.13017090236483]
Ad creatives with enjoyable visual appearance may increase the click-through rate (CTR) of products.
We propose an Adaptive and Efficient ad creative Selection framework based on a tree structure.
Based on the tree structure, Thompson sampling is adapted with dynamic programming, leading to efficient exploration for potential ad creatives with the largest CTR.
arXiv Detail & Related papers (2021-03-02T03:39:41Z)
- Recommending Themes for Ad Creative Design via Visual-Linguistic Representations [27.13752835161338]
We propose a theme (keyphrase) recommender system for ad creative strategists.
The theme recommender is based on aggregating results from a visual question answering (VQA) task.
We show that cross-modal representations lead to significantly better classification accuracy and ranking precision-recall metrics.
arXiv Detail & Related papers (2020-01-20T18:04:10Z)