Auctions with LLM Summaries
- URL: http://arxiv.org/abs/2404.08126v1
- Date: Thu, 11 Apr 2024 21:05:56 GMT
- Title: Auctions with LLM Summaries
- Authors: Kumar Avinava Dubey, Zhe Feng, Rahul Kidambi, Aranyak Mehta, Di Wang,
- Abstract summary: We study an auction setting in which bidders bid for placement of their content within a summary generated by a large language model (LLM).
We propose a novel factorized framework in which an auction module and an LLM module work together via a prediction model to provide welfare maximizing summary outputs.
- Score: 13.797878055583531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study an auction setting in which bidders bid for placement of their content within a summary generated by a large language model (LLM), e.g., an ad auction in which the display is a summary paragraph of multiple ads. This generalizes the classic ad settings such as position auctions to an LLM generated setting, which allows us to handle general display formats. We propose a novel factorized framework in which an auction module and an LLM module work together via a prediction model to provide welfare maximizing summary outputs in an incentive compatible manner. We provide a theoretical analysis of this framework and synthetic experiments to demonstrate the feasibility and validity of the system together with welfare comparisons.
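The abstract's core mechanism — choosing which bidders' content to feature so as to maximize welfare while remaining incentive compatible — can be illustrated with a classic VCG-style auction. The sketch below is not the paper's factorized framework; it is a minimal stand-in that assumes the prediction model reduces to a per-bidder `quality` score and that at most `k` ads fit in the summary. All function names are hypothetical.

```python
from itertools import combinations

def welfare(subset, bids, quality):
    # Predicted welfare of featuring this subset of ads in the summary:
    # each bidder's value is their bid scaled by a predicted quality score
    # (a simplified stand-in for the paper's prediction model).
    return sum(bids[i] * quality[i] for i in subset)

def vcg_auction(bids, quality, k):
    """Choose up to k ads to feature and compute VCG payments,
    which make truthful bidding a dominant strategy."""
    n = len(bids)
    all_subsets = [s for r in range(k + 1) for s in combinations(range(n), r)]
    winner = max(all_subsets, key=lambda s: welfare(s, bids, quality))
    payments = {}
    for i in winner:
        # Each winner pays the externality it imposes on the others:
        # the best welfare the others could get without bidder i ...
        best_without_i = max(
            welfare(s, bids, quality) for s in all_subsets if i not in s
        )
        # ... minus the welfare the others actually get in the chosen outcome.
        others_in_winner = welfare([j for j in winner if j != i], bids, quality)
        payments[i] = best_without_i - others_in_winner
    return set(winner), payments
```

With equal quality scores and two slots, each winner pays the highest displaced bid, matching the familiar position-auction intuition the paper generalizes.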
Related papers
- LaMSUM: A Novel Framework for Extractive Summarization of User Generated Content using LLMs [6.770555526416268]
Large Language Models (LLMs) have demonstrated impressive performance across a wide range of NLP tasks, including summarization.
We propose a novel framework LaMSUM to generate extractive summaries through LLMs for large user-generated text by leveraging voting algorithms.
Our evaluation on three popular open-source LLMs (Llama 3, Mixtral, and Gemini) reveals that LaMSUM outperforms state-of-the-art extractive summarization methods.
arXiv Detail & Related papers (2024-06-22T10:25:55Z)
- Ad Auctions for LLMs via Retrieval Augmented Generation [12.9128551468564]
This paper introduces novel auction mechanisms for ad allocation and pricing within the textual outputs of large language models (LLMs).
We propose a segment auction where an ad is probabilistically retrieved for each discourse segment according to its bid and relevance, following the RAG framework.
We show that our auction maximizes logarithmic social welfare, a new notion of welfare that balances allocation efficiency and fairness, and we characterize the associated incentive-compatible pricing rule.
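The segment auction described above can be sketched as a single-winner lottery per discourse segment, where an ad's retrieval probability is proportional to its bid times its relevance. This is a minimal illustration of that allocation rule only, not the paper's full mechanism or its pricing rule; the function name and fixed random seed are assumptions for the sketch.

```python
import random

def segment_auction(bids, relevance, rng=random.Random(0)):
    """For one discourse segment, retrieve a single ad with probability
    proportional to bid * relevance (a simplified stand-in for the
    RAG-based retrieval rule)."""
    weights = [b * r for b, r in zip(bids, relevance)]
    total = sum(weights)
    # Allocation probabilities implied by the bid-times-relevance rule.
    probs = [w / total for w in weights]
    # Sample one winner for this segment.
    winner = rng.choices(range(len(bids)), weights=weights, k=1)[0]
    return winner, probs
```

For example, with bids [4, 1] and relevance [0.5, 1.0], the weights are [2, 1], so the first ad is retrieved with probability 2/3 for that segment.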
arXiv Detail & Related papers (2024-06-12T22:05:51Z)
- Show, Don't Tell: Aligning Language Models with Demonstrated Feedback [54.10302745921713]
Demonstration ITerated Task Optimization (DITTO) directly aligns language model outputs to a user's demonstrated behaviors.
We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts.
arXiv Detail & Related papers (2024-06-02T23:13:56Z)
- Enhancing Presentation Slide Generation by LLMs with a Multi-Staged End-to-End Approach [21.8104104944488]
Existing approaches for generating a rich presentation from a document are often semi-automatic, or only place a flat summary into the slides, ignoring the importance of a good narrative.
We propose a multi-staged end-to-end model which uses a combination of LLM and VLM.
We have experimentally shown that compared to applying LLMs directly with state-of-the-art prompting, our proposed multi-staged solution is better in terms of automated metrics and human evaluation.
arXiv Detail & Related papers (2024-06-01T07:49:31Z) - Multi-modal Auto-regressive Modeling via Visual Words [96.25078866446053]
We propose the concept of visual words, which maps the visual features to probability distributions over Large Multi-modal Models' vocabulary.
We further explore the distribution of visual features in the semantic space within LMM and the possibility of using text embeddings to represent visual information.
arXiv Detail & Related papers (2024-03-12T14:58:52Z) - Boosting Segment Anything Model Towards Open-Vocabulary Learning [69.42565443181017]
Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model.
Despite SAM finding applications and adaptations in various domains, its primary limitation lies in the inability to grasp object semantics.
We present Sambor to seamlessly integrate SAM with the open-vocabulary object detector in an end-to-end framework.
arXiv Detail & Related papers (2023-12-06T17:19:00Z) - Routing to the Expert: Efficient Reward-guided Ensemble of Large
Language Models [69.51130760097818]
We propose Zooter, a reward-guided routing method distilling rewards on training queries to train a routing function.
We evaluate Zooter on a comprehensive benchmark collection with 26 subsets on different domains and tasks.
arXiv Detail & Related papers (2023-11-15T04:40:43Z) - Online Advertisements with LLMs: Opportunities and Challenges [51.96140910798771]
This paper explores the potential for leveraging Large Language Models (LLM) in the realm of online advertising systems.
We delve into the essential requirements, including privacy, latency, and reliability, as well as the satisfaction of users and advertisers, that such a system must fulfill.
arXiv Detail & Related papers (2023-11-11T02:13:32Z) - LLMs Understand Glass-Box Models, Discover Surprises, and Suggest
Repairs [10.222281712562705]
We show that large language models (LLMs) are remarkably good at working with interpretable models.
By adopting a hierarchical approach to reasoning, LLMs can provide comprehensive model-level summaries.
We present the package TalkToEBM as an open-source LLM-GAM interface.
arXiv Detail & Related papers (2023-08-02T13:59:35Z) - mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality [95.76661165594884]
mPLUG-Owl is a training paradigm that equips large language models (LLMs) with multi-modal abilities.
The training paradigm involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM.
Experimental results show that our model outperforms existing multi-modal models.
arXiv Detail & Related papers (2023-04-27T13:27:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.