Truthful Aggregation of LLMs with an Application to Online Advertising
- URL: http://arxiv.org/abs/2405.05905v4
- Date: Tue, 08 Oct 2024 14:23:54 GMT
- Title: Truthful Aggregation of LLMs with an Application to Online Advertising
- Authors: Ermis Soumalias, Michael J. Curry, Sven Seuken
- Abstract summary: We introduce MOSAIC, an auction mechanism that ensures that truthful reporting is a dominant strategy for advertisers.
We show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead.
- Score: 11.552000005640203
- License:
- Abstract: The next frontier of online advertising is revenue generation from LLM-generated content. We consider a setting where advertisers aim to influence the responses of an LLM to align with their interests, while platforms seek to maximize advertiser value and ensure user satisfaction. The challenge is that advertisers' preferences generally conflict with those of the user, and advertisers may misreport their preferences. To address this, we introduce MOSAIC, an auction mechanism that ensures that truthful reporting is a dominant strategy for advertisers and that aligns the utility of each advertiser with their contribution to social welfare. Importantly, the mechanism operates without LLM fine-tuning or access to model weights and provably converges to the output of the optimally fine-tuned LLM as computational resources increase. Additionally, it can incorporate contextual information about advertisers, which significantly improves social welfare. Through experiments with a publicly available LLM, we show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead. While our motivating application is online advertising, our mechanism can be applied in any setting with monetary transfers, making it a general-purpose solution for truthfully aggregating the preferences of self-interested agents over LLM-generated replies.
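The abstract leaves the mechanism's internals to the paper itself; the Python sketch below is only a minimal illustration of the kind of auction it describes, under assumed simplifications: candidate replies are sampled from the base LLM without fine-tuning, each advertiser reports a value function over replies, the platform tilts the sampling distribution toward higher aggregated reported value, and payments are charged VCG-style so that each advertiser's utility tracks its marginal contribution to welfare. All names (`reply_auction_sketch`, `base_lm_sample`, `advertiser_values`) are hypothetical and not taken from the paper.

```python
import math
import random

def reply_auction_sketch(base_lm_sample, advertiser_values, num_candidates=8, temperature=1.0):
    """Illustrative auction over LLM-generated replies (not the paper's exact algorithm).

    base_lm_sample: zero-argument callable returning one candidate reply from the base LLM.
    advertiser_values: dict mapping advertiser id -> reported value function (reply -> float).
    """
    # Sample candidates from the unmodified base model (no fine-tuning, no weight access).
    candidates = [base_lm_sample() for _ in range(num_candidates)]

    def welfare(reply, exclude=None):
        # Sum of reported values, optionally leaving one advertiser out.
        return sum(v(reply) for a, v in advertiser_values.items() if a != exclude)

    # Tilt the base distribution toward replies with higher aggregated reported value.
    weights = [math.exp(welfare(r) / temperature) for r in candidates]
    chosen = random.choices(candidates, weights=weights, k=1)[0]

    # VCG-style payments: each advertiser pays the externality it imposes on the others.
    payments = {}
    for adv in advertiser_values:
        best_without = max(welfare(r, exclude=adv) for r in candidates)
        payments[adv] = best_without - welfare(chosen, exclude=adv)
    return chosen, payments
```

In use, `base_lm_sample` could wrap any generation call (e.g., a lambda around a chat completion), and each entry of `advertiser_values` would score a reply in monetary terms.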
Related papers
- WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents [55.64361927346957]
We propose a neurosymbolic approach to learn rules gradient-free through large language models (LLMs).
Our embodied LLM agent "WALL-E" is built upon model-predictive control (MPC).
On open-world challenges in Minecraft and ALFWorld, WALL-E achieves higher success rates than existing methods.
arXiv Detail & Related papers (2024-10-09T23:37:36Z)
- Ad Auctions for LLMs via Retrieval Augmented Generation [12.9128551468564]
This paper introduces novel auction mechanisms for ad allocation and pricing within the textual outputs of large language models (LLMs).
We propose a segment auction where an ad is probabilistically retrieved for each discourse segment according to its bid and relevance, following the RAG framework.
We show that our auction maximizes logarithmic social welfare, a new notion of welfare that balances allocation efficiency and fairness, and we characterize the associated incentive-compatible pricing rule.
arXiv Detail & Related papers (2024-06-12T22:05:51Z)
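As a rough illustration of the segment auction summarized above, the sketch below retrieves one ad per discourse segment with probability proportional to bid times relevance, in the spirit of RAG-weighted allocation; the payment here is a placeholder second-price-style charge, not the paper's characterized incentive-compatible pricing rule. The names (`segment_auction_sketch`, `relevance`) are hypothetical.

```python
import random

def segment_auction_sketch(bids, relevance):
    """Allocate one ad for a single discourse segment (illustrative only).

    bids: dict ad_id -> bid (willingness to pay for this segment).
    relevance: dict ad_id -> nonnegative relevance score to the segment.
    """
    # Allocation: probability proportional to bid * relevance, mimicking
    # bid-weighted retrieval in a RAG pipeline.
    scores = {a: bids[a] * relevance[a] for a in bids}
    total = sum(scores.values())
    if total <= 0:
        return None, 0.0
    winner = random.choices(list(scores), weights=list(scores.values()), k=1)[0]

    # Placeholder pricing: winner pays the highest competing score scaled by
    # its own relevance, capped at its bid (NOT the paper's pricing rule).
    runner_up = max((s for a, s in scores.items() if a != winner), default=0.0)
    price = min(bids[winner], runner_up / relevance[winner]) if relevance[winner] > 0 else 0.0
    return winner, price
```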
- Scaling Up LLM Reviews for Google Ads Content Moderation [22.43127685744644]
Large language models (LLMs) are powerful tools for content moderation, but their inference costs and latency make them prohibitive for casual use on large datasets.
This study proposes a method for scaling up LLM reviews for content in Google Ads.
arXiv Detail & Related papers (2024-02-07T23:47:02Z)
- Making Large Language Models Better Knowledge Miners for Online Marketing with Progressive Prompting Augmentation [34.37733369078883]
We propose PAIR, a novel Progressive prompting Augmented mIning fRamework for harvesting a marketing-oriented knowledge graph with LLMs.
In particular, we reduce the pure relation generation to an LLM-based adaptive relation filtering process through the knowledge-empowered prompting technique.
In terms of online serving, we specialize a small and white-box PAIR (i.e., LightPAIR), which is fine-tuned with a high-quality corpus provided by a strong teacher LLM.
arXiv Detail & Related papers (2023-12-08T03:44:09Z)
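The summary above describes reducing relation generation to an LLM-based filtering step driven by prompting. The sketch below shows one generic way such a filter could look: candidate relations between entity pairs are kept or dropped based on an LLM's yes/no judgment. The prompt wording and the `ask_llm` callable are assumptions for illustration, not PAIR's actual prompts or pipeline.

```python
def filter_relations_sketch(entity_pairs, candidate_relations, ask_llm):
    """Keep only relations an LLM judges plausible for each entity pair (illustrative).

    entity_pairs: list of (head, tail) strings.
    candidate_relations: list of relation names to test for every pair.
    ask_llm: callable(prompt: str) -> str, e.g. a thin wrapper around any chat model.
    """
    kept = []
    for head, tail in entity_pairs:
        for rel in candidate_relations:
            prompt = (
                f"In a marketing knowledge graph, is the relation '{rel}' "
                f"plausible between '{head}' and '{tail}'? Answer yes or no."
            )
            # Keep the triple only if the model answers affirmatively.
            if ask_llm(prompt).strip().lower().startswith("yes"):
                kept.append((head, rel, tail))
    return kept
```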
- The Adoption and Efficacy of Large Language Models: Evidence From Consumer Complaints in the Financial Industry [2.300664273021602]
This research explores the effect of Large Language Models (LLMs) on consumer complaints submitted to the Consumer Financial Protection Bureau from 2015 to 2024.
We analyzed over 1 million complaints and identified a significant increase in LLM usage following the release of ChatGPT.
Our findings suggest that facilitating access to LLMs can help firms better understand consumer concerns and level the playing field among consumers.
arXiv Detail & Related papers (2023-11-28T04:07:34Z)
- Online Advertisements with LLMs: Opportunities and Challenges [51.96140910798771]
This paper explores the potential for leveraging Large Language Models (LLMs) in the realm of online advertising systems.
We introduce a general framework for LLM advertisement, consisting of modification, bidding, prediction, and auction modules.
arXiv Detail & Related papers (2023-11-11T02:13:32Z)
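The four modules named above (modification, bidding, prediction, auction) can be pictured as a simple pipeline; the sketch below wires up placeholder versions only to show how such modules might hand data to each other. Every function here is a hypothetical stand-in, not the framework proposed in the paper.

```python
def llm_ad_pipeline_sketch(user_query, base_reply, advertisers, predict_ctr, run_auction, modify_reply):
    """Illustrative flow through prediction, bidding, auction, and modification modules.

    advertisers: dict ad_id -> {"ad_text": str, "bid": float}.
    predict_ctr: callable(user_query, ad_text) -> estimated click probability.
    run_auction: callable(dict ad_id -> score) -> (winner_id, price).
    modify_reply: callable(base_reply, ad_text) -> reply with the ad woven in.
    """
    # Prediction module: estimate how well each ad fits this query and reply.
    ctrs = {a: predict_ctr(user_query, info["ad_text"]) for a, info in advertisers.items()}
    # Bidding module output combined with predictions into auction scores.
    scores = {a: advertisers[a]["bid"] * ctrs[a] for a in advertisers}
    # Auction module: pick a winner and a price.
    winner, price = run_auction(scores)
    # Modification module: rewrite the LLM reply to include the winning ad.
    final_reply = modify_reply(base_reply, advertisers[winner]["ad_text"])
    return final_reply, winner, price
```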
- Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation [58.31430028519306]
This study explores how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process.
While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs.
arXiv Detail & Related papers (2023-10-16T15:11:01Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- A Cooperative-Competitive Multi-Agent Framework for Auto-bidding in Online Advertising [53.636153252400945]
We propose a general Multi-Agent reinforcement learning framework for Auto-Bidding, namely MAAB, to learn the auto-bidding strategies.
Our approach outperforms several baseline methods in terms of social welfare and guarantees the ad platform's revenue.
arXiv Detail & Related papers (2021-06-11T08:07:14Z)
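The MAAB summary above is high-level; as a generic illustration of auto-bidding (not the paper's cooperative-competitive method), the sketch below runs repeated second-price auctions in which each agent scales its value by a pacing multiplier that adapts to how quickly its budget is being spent. All names are hypothetical.

```python
def autobidding_simulation_sketch(values, budgets, num_rounds=100, learning_rate=0.05):
    """Repeated second-price auctions with budget-pacing bidders (illustrative only).

    values: dict agent -> per-impression value.
    budgets: dict agent -> total budget for the whole horizon.
    """
    pacing = {a: 1.0 for a in values}   # bid multipliers
    spend = {a: 0.0 for a in values}
    revenue = 0.0
    for t in range(num_rounds):
        # Agents with remaining budget submit paced bids.
        bids = {a: pacing[a] * values[a] for a in values if spend[a] < budgets[a]}
        if len(bids) < 2:
            break
        winner = max(bids, key=bids.get)
        price = sorted(bids.values())[-2]   # second-highest bid
        spend[winner] += price
        revenue += price
        # Pacing update: slow down if spending ahead of schedule, speed up otherwise.
        for a in values:
            target = budgets[a] * (t + 1) / num_rounds
            pacing[a] = max(0.0, pacing[a] + learning_rate * (target - spend[a]) / max(budgets[a], 1e-9))
    return spend, revenue
```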
- Modeling Influencer Marketing Campaigns In Social Networks [2.0303656145222857]
More than 3.8 billion people around the world actively use social media.
In this work, we present an agent-based model (ABM) that can simulate the dynamics of influencer advertising campaigns.
arXiv Detail & Related papers (2021-06-03T11:01:06Z)
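A minimal example of the agent-based modeling idea mentioned above: an independent-cascade-style simulation in which an influencer seed activates followers probabilistically over a social graph. The graph structure and the activation probability are assumptions for illustration, not the paper's calibrated model.

```python
import random

def influencer_cascade_sketch(followers, seed, activation_prob=0.1, rng=random.Random(0)):
    """Simulate a simple influencer campaign as an independent cascade (illustrative).

    followers: dict user -> list of users who see that user's posts.
    seed: the influencer who starts the campaign.
    """
    activated = {seed}
    frontier = [seed]
    while frontier:
        next_frontier = []
        for user in frontier:
            for follower in followers.get(user, []):
                # Each exposed follower adopts the campaign with a fixed probability.
                if follower not in activated and rng.random() < activation_prob:
                    activated.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return activated  # users reached by the campaign
```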
- Dynamic Knapsack Optimization Towards Efficient Multi-Channel Sequential Advertising [52.3825928886714]
We formulate the sequential advertising strategy optimization as a dynamic knapsack problem.
We propose a theoretically guaranteed bilevel optimization framework, which significantly reduces the solution space of the original optimization problem.
To improve the exploration efficiency of reinforcement learning, we also devise an effective action space reduction approach.
arXiv Detail & Related papers (2020-06-29T18:50:35Z)
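To make the knapsack framing above concrete, the sketch below solves a small 0/1 knapsack by dynamic programming, treating each candidate ad exposure as an item with a cost and an expected value under a total budget; the formulation in the paper is sequential and multi-channel, so this is only the simplest static analogue.

```python
def budget_knapsack_sketch(items, budget):
    """0/1 knapsack over candidate ad exposures (static analogue, illustrative).

    items: list of (cost, expected_value) pairs with integer costs.
    budget: integer total budget (knapsack capacity).
    """
    # best[b] = maximum expected value achievable with total cost at most b
    best = [0.0] * (budget + 1)
    for cost, value in items:
        # Iterate budgets downward so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + value)
    return best[budget]
```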
This list is automatically generated from the titles and abstracts of the papers on this site.