Ads that Talk Back: Implications and Perceptions of Injecting Personalized Advertising into LLM Chatbots
- URL: http://arxiv.org/abs/2409.15436v2
- Date: Sat, 04 Oct 2025 19:40:03 GMT
- Title: Ads that Talk Back: Implications and Perceptions of Injecting Personalized Advertising into LLM Chatbots
- Authors: Brian Jay Tang, Kaiwen Sun, Noah T. Curran, Florian Schaub, Kang G. Shin
- Abstract summary: Companies have proposed exploring ad-based revenue streams for monetizing large language models (LLMs). This paper investigates the implications of personalizing LLM advertisements to individual users. We created an advertising dataset and an open-source LLM, Phi-4-Ads, fine-tuned to serve ads and flexibly adapt to user preferences.
- Score: 15.907632070023702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in large language models (LLMs) have enabled the creation of highly effective chatbots. However, the compute costs of widely deploying LLMs have raised questions about profitability. Companies have proposed exploring ad-based revenue streams for monetizing LLMs, which could serve as the new de facto platform for advertising. This paper investigates the implications of personalizing LLM advertisements to individual users via a between-subjects experiment with 179 participants. We developed a chatbot that embeds personalized product advertisements within LLM responses, inspired by similar forays by AI companies. The evaluation of our benchmarks showed that ad injection only slightly impacted LLM performance, particularly response desirability. Results revealed that participants struggled to detect ads, and even preferred LLM responses with hidden advertisements. Rather than clicking on our advertising disclosure, participants tried changing their advertising settings using natural language queries. We created an advertising dataset and an open-source LLM, Phi-4-Ads, fine-tuned to serve ads and flexibly adapt to user preferences.
Related papers
- Detecting RAG Advertisements Across Advertising Styles [24.080227437136585]
We develop a taxonomy of advertising styles for large language models (LLMs). We simulate advertisers attempting to evade detection by changing their advertising style. We evaluate a variety of ad-detection approaches with respect to their robustness.
arXiv Detail & Related papers (2026-03-05T08:16:21Z)
- Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different Languages [51.96666324242191]
We analyze whether user utilization of novel writing assistants in a charity advertisement writing task is affected by the AI's performance in a second language. We quantify the extent to which these patterns translate into the persuasiveness of generated charity advertisements.
arXiv Detail & Related papers (2025-02-13T17:49:30Z)
- Advertiser Content Understanding via LLMs for Google Ads Safety [9.376815457907195]
This study proposes a method to understand advertisers' intent regarding content policy violations, using Large Language Models (LLMs).
We generate an advertiser's content profile based on multiple signals from their ads, domains, targeting info, etc.
After minimal prompt tuning, our method reached 95% accuracy on a small test set.
arXiv Detail & Related papers (2024-09-10T00:57:51Z)
- Truthful Aggregation of LLMs with an Application to Online Advertising [11.552000005640203]
We introduce MOSAIC, an auction mechanism that ensures that truthful reporting is a dominant strategy for advertisers. We show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead.
arXiv Detail & Related papers (2024-05-09T17:01:31Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models. It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop. Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Large Language Models: A Survey [66.39828929831017]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks. LLMs' ability of general-purpose language understanding and generation is acquired by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- Detecting Generated Native Ads in Conversational Search [33.5694271503764]
Conversational search engines such as YouChat and Microsoft Copilot use large language models (LLMs) to generate responses to queries.
It is only a small step to also let the same technology insert ads within the generated responses.
Inserted ads would be reminiscent of native advertising and product placement.
arXiv Detail & Related papers (2024-02-07T14:22:51Z)
- Online Advertisements with LLMs: Opportunities and Challenges [51.96140910798771]
This paper explores the potential for leveraging Large Language Models (LLMs) in the realm of online advertising systems.
We introduce a general framework for LLM advertisement, consisting of modification, bidding, prediction, and auction modules.
arXiv Detail & Related papers (2023-11-11T02:13:32Z)
- Long-Term Ad Memorability: Understanding & Generating Memorable Ads [54.23854539909078]
Despite the importance of long-term memory in marketing and brand building, until now, there has been no large-scale study on the memorability of ads. We release the first memorability dataset, LAMBDA, consisting of 1749 participants and 2205 ads covering 276 brands. Running statistical tests over different participant subpopulations and ad types, we find many interesting insights into what makes an ad memorable, e.g., fast-moving ads are more memorable than those with slower scenes. We present a scalable method to build a high-quality memorable ad generation model by leveraging automatically annotated data.
arXiv Detail & Related papers (2023-09-01T10:27:04Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
- Do Interruptions Pay Off? Effects of Interruptive Ads on Consumers' Willingness to Pay [79.9312329825761]
We present the results of a study designed to measure the impact of interruptive advertising on consumers' willingness to pay for products bearing the advertiser's brand.
Our results contribute to the research on the economic impact of advertising, and introduce a method of measuring actual (as opposed to self-reported) willingness to pay in experimental marketing research.
arXiv Detail & Related papers (2020-05-14T09:26:57Z)