AdSEE: Investigating the Impact of Image Style Editing on Advertisement
Attractiveness
- URL: http://arxiv.org/abs/2309.08159v1
- Date: Fri, 15 Sep 2023 04:52:49 GMT
- Title: AdSEE: Investigating the Impact of Image Style Editing on Advertisement
Attractiveness
- Authors: Liyao Jiang, Chenglin Li, Haolan Chen, Xiaodong Gao, Xinwang Zhong,
Yang Qiu, Shani Ye, Di Niu
- Abstract summary: We propose Advertisement Style Editing and Attractiveness Enhancement (AdSEE), which explores whether semantic editing of ad images can affect or alter the popularity of online advertisements.
We introduce StyleGAN-based facial semantic editing and inversion to ad images and train a click rate predictor that predicts click rates from GAN-based face latent representations.
Online A/B tests performed over a period of 5 days have verified the increased click-through rates of AdSEE-edited samples as compared to a control group of original ads.
- Score: 25.531489722164178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online advertisements are important elements in e-commerce sites, social
media platforms, and search engines. With the increasing popularity of mobile
browsing, many online ads are displayed with visual information in the form of
a cover image in addition to text descriptions to grab the attention of users.
Various recent studies have focused on predicting the click rates of online
advertisements while taking visual features into account, or on composing
optimal advertisement elements to enhance visibility. In this paper, we propose
Advertisement Style Editing and Attractiveness Enhancement (AdSEE), which
explores whether semantic editing of ad images can affect or alter the
popularity of online advertisements. We introduce StyleGAN-based facial
semantic editing and inversion to ad images and train a click rate predictor
that predicts click rates from GAN-based face latent representations in
addition to traditional visual and textual features. Using a large collected
dataset named QQ-AD,
containing 20,527 online ads, we perform extensive offline tests to study how
different semantic directions and their edit coefficients may impact click
rates. We further design a Genetic Advertisement Editor to efficiently search
for the optimal edit directions and intensity given an input ad cover image to
enhance its projected click rates. Online A/B tests performed over a period of
5 days have confirmed the increased click-through rates of AdSEE-edited samples
compared to a control group of original ads, verifying the relation between
image styles and ad popularity. We open source the code for AdSEE research at
https://github.com/LiyaoJiang1998/adsee.
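As a rough illustration of the pipeline described in the abstract, the following is a minimal sketch (not the authors' implementation) of the two core ideas: shifting a GAN-inverted face latent along semantic edit directions with per-direction coefficients, and running a simple genetic search over those coefficients to maximize a predicted click rate. The latent dimensionality, direction set, coefficient range, stand-in CTR predictor, and all function names are illustrative assumptions; the actual AdSEE code is in the repository linked above.

```python
# Illustrative sketch only: the real AdSEE pipeline uses a StyleGAN encoder
# (GAN inversion), learned semantic edit directions, and a trained click-rate
# predictor over face latents plus visual/textual ad features. All names and
# values below are hypothetical placeholders.
import random
import numpy as np

LATENT_DIM = 512           # StyleGAN W-space dimensionality (assumed)
N_DIRECTIONS = 5           # number of semantic edit directions considered
COEFF_RANGE = (-3.0, 3.0)  # allowed edit intensity per direction (assumed)

# Hypothetical pre-computed semantic directions (e.g., smile, age, pose),
# one unit vector per row.
edit_directions = np.random.randn(N_DIRECTIONS, LATENT_DIM)
edit_directions /= np.linalg.norm(edit_directions, axis=1, keepdims=True)

def apply_edit(w: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Shift an inverted face latent w by a weighted sum of edit directions."""
    return w + coeffs @ edit_directions

def predict_ctr(w_edited: np.ndarray) -> float:
    """Stand-in for the trained click-rate predictor; the paper's predictor
    also consumes traditional visual and textual ad features."""
    return float(1.0 / (1.0 + np.exp(-w_edited.mean())))

def genetic_search(w: np.ndarray, pop_size=32, generations=20, mutation=0.3):
    """Toy genetic search over edit coefficients to maximize predicted CTR."""
    pop = [np.random.uniform(*COEFF_RANGE, N_DIRECTIONS) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: predict_ctr(apply_edit(w, c)), reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            mask = np.random.rand(N_DIRECTIONS) < 0.5                 # crossover
            child = np.where(mask, a, b)
            child = child + mutation * np.random.randn(N_DIRECTIONS)  # mutation
            children.append(np.clip(child, *COEFF_RANGE))
        pop = parents + children
    best = max(pop, key=lambda c: predict_ctr(apply_edit(w, c)))
    return best, predict_ctr(apply_edit(w, best))

# Usage: w_inverted would come from GAN inversion of the ad's face region.
w_inverted = np.random.randn(LATENT_DIM)
best_coeffs, best_ctr = genetic_search(w_inverted)
print(best_coeffs, best_ctr)
```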
Related papers
- Vision-guided and Mask-enhanced Adaptive Denoising for Prompt-based Image Editing [67.96788532285649]
We present a Vision-guided and Mask-enhanced Adaptive Editing (ViMAEdit) method with three key novel designs.
First, we propose to leverage image embeddings as explicit guidance to enhance the conventional textual prompt-based denoising process.
Second, we devise a self-attention-guided iterative editing area grounding strategy.
arXiv Detail & Related papers (2024-10-14T13:41:37Z)
- Why am I Still Seeing This: Measuring the Effectiveness of Ad Controls and Explanations in AI-Mediated Ad Targeting Systems [55.02903075972816]
We evaluate the effectiveness of Meta's "See less" ad control and the actionability of ad targeting explanations following the shift to AI-mediated targeting.
We find that utilizing the "See less" ad control for the topics we study does not significantly reduce the number of ads shown by Meta on these topics.
We find that the majority of ad targeting explanations for local ads made no reference to location-specific targeting criteria.
arXiv Detail & Related papers (2024-08-21T18:03:11Z)
- Improving Generalization of Image Captioning with Unsupervised Prompt Learning [63.26197177542422]
Generalization of Image Captioning (GeneIC) learns a domain-specific prompt vector for the target domain without requiring annotated data.
GeneIC aligns visual and language modalities with a pre-trained Contrastive Language-Image Pre-Training (CLIP) model.
arXiv Detail & Related papers (2023-08-05T12:27:01Z)
- Discrimination through Image Selection by Job Advertisers on Facebook [79.21648699199648]
We propose and investigate the prevalence of a new means for discrimination in job advertising.
It combines both targeting and delivery -- through the disproportionate representation or exclusion of people of certain demographics in job ad images.
We use the Facebook Ad Library to demonstrate the prevalence of this practice.
arXiv Detail & Related papers (2023-06-13T03:43:58Z)
- Boost CTR Prediction for New Advertisements via Modeling Visual Content [55.11267821243347]
We exploit the visual content in ads to boost the performance of CTR prediction models.
We learn the embedding for each visual ID based on the historical user-ad interactions accumulated in the past.
After incorporating the visual ID embedding in the CTR prediction model of Baidu online advertising, the average CTR of ads improves by 1.46%, and the total charge increases by 1.10%.
arXiv Detail & Related papers (2022-09-23T17:08:54Z)
- TSI: an Ad Text Strength Indicator using Text-to-CTR and Semantic-Ad-Similarity [16.10904771281746]
We propose an ad text strength indicator (TSI) which: (i) predicts the click-through-rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
As suggestions for ad text improvement, TSI shows anonymized versions of superior ads (higher predicted CTR) in the neighborhood; a toy sketch of this neighborhood comparison appears after this list.
arXiv Detail & Related papers (2021-08-18T16:24:40Z)
- VisualTextRank: Unsupervised Graph-based Content Extraction for Automating Ad Text to Image Search [6.107273836558503]
We propose VisualTextRank as an unsupervised method to augment input ad text using semantically similar ads.
VisualTextRank builds on prior work on graph based context extraction.
Online tests with a simplified version led to a 28.7% increase in the usage of stock image search.
arXiv Detail & Related papers (2021-08-05T16:47:21Z)
- Learning to Create Better Ads: Generation and Ranking Approaches for Ad Creative Refinement [26.70647666598025]
We study approaches to refine the given ad text and image by: (i) generating new ad text, (ii) recommending keyphrases for new ad text, and (iii) recommending image tags (objects in the image).
Based on A/B tests conducted by multiple advertisers, we form pairwise examples of inferior and superior ad creatives.
We also share broadly applicable insights from our experiments using data from the Yahoo Gemini ad platform.
arXiv Detail & Related papers (2020-08-17T16:46:28Z)
- Do Interruptions Pay Off? Effects of Interruptive Ads on Consumers' Willingness to Pay [79.9312329825761]
We present the results of a study designed to measure the impact of interruptive advertising on consumers' willingness to pay for products bearing the advertiser's brand.
Our results contribute to the research on the economic impact of advertising, and introduce a method of measuring actual (as opposed to self-reported) willingness to pay in experimental marketing research.
arXiv Detail & Related papers (2020-05-14T09:26:57Z)
- Recommending Themes for Ad Creative Design via Visual-Linguistic Representations [27.13752835161338]
We propose a theme (keyphrase) recommender system for ad creative strategists.
The theme recommender is based on aggregating results from a visual question answering (VQA) task.
We show that cross-modal representations lead to significantly better classification accuracy and ranking precision-recall metrics.
arXiv Detail & Related papers (2020-01-20T18:04:10Z)
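Relating to the TSI entry above, the following is a toy, self-contained sketch (not the paper's implementation) of the described neighborhood comparison: predict a CTR for the input ad text, retrieve semantically similar ads, and label the input strong or weak by comparing predicted CTRs within that neighborhood. The embedding function, CTR model, corpus, and threshold are all illustrative placeholders.

```python
# Hedged TSI-style sketch; embed() and predict_ctr() are placeholders, not the
# paper's text-to-CTR or semantic-ad-similarity models.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder text embedding (deterministic random vector per string)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def predict_ctr(text: str) -> float:
    """Placeholder text-to-CTR model."""
    return float(1.0 / (1.0 + np.exp(-len(text) / 50.0)))

def ad_strength(input_ad: str, corpus: list, k: int = 5) -> str:
    """Compare the input ad's predicted CTR against its k nearest neighbors."""
    q = embed(input_ad)
    def sim(ad):
        e = embed(ad)
        return float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e)))
    neighbors = sorted(corpus, key=sim, reverse=True)[:k]
    input_ctr = predict_ctr(input_ad)
    stronger = [ad for ad in neighbors if predict_ctr(ad) > input_ctr]
    # Superior neighbors (higher predicted CTR) would be surfaced, anonymized,
    # as suggestions for improving the input ad text.
    return "weak" if len(stronger) > k // 2 else "strong"

# Usage with a tiny illustrative corpus.
ads = ["Buy one get one free on all shoes today", "Limited offer", "New arrivals in store"]
print(ad_strength("Big summer sale on sneakers", ads))
```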
This list is automatically generated from the titles and abstracts of the papers on this site.