Algorithmic Collusion by Large Language Models
- URL: http://arxiv.org/abs/2404.00806v2
- Date: Wed, 27 Nov 2024 00:19:55 GMT
- Title: Algorithmic Collusion by Large Language Models
- Authors: Sara Fish, Yannai A. Gonczarowski, Ran I. Shorrer
- Abstract summary: We conduct experiments with algorithmic pricing agents based on Large Language Models (LLMs).
We find that LLM-based agents are adept at pricing tasks, that they autonomously collude in oligopoly settings to the detriment of consumers, and that variation in seemingly innocuous phrases in LLM instructions may increase collusion.
- Score: 0.08192907805418582
- Abstract: The rise of algorithmic pricing raises concerns of algorithmic collusion. We conduct experiments with algorithmic pricing agents based on Large Language Models (LLMs). We find that (1) LLM-based agents are adept at pricing tasks, (2) LLM-based pricing agents autonomously collude in oligopoly settings to the detriment of consumers, and (3) variation in seemingly innocuous phrases in LLM instructions ("prompts") may increase collusion. Novel off-path analysis techniques uncover price-war concerns as contributing to these phenomena. Our results extend to auction settings. Our findings uncover unique challenges to any future regulation of LLM-based pricing agents, and black-box pricing agents more broadly.
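The abstract does not spell out the experimental environment; as an illustrative sketch of the kind of oligopoly benchmark such pricing experiments rest on, the following computes competitive (Nash) and collusive price benchmarks in a symmetric logit-demand duopoly. All parameter values (A, MU, COST) and the grid search are assumptions for illustration, not the paper's calibration.

```python
import math

# Logit demand for a symmetric duopoly: firm i's market share depends on
# both prices; an outside option (utility 0) caps total demand.
A, MU, COST = 2.0, 0.25, 1.0  # illustrative parameters only

def demand(p_i, p_j):
    u_i = math.exp((A - p_i) / MU)
    u_j = math.exp((A - p_j) / MU)
    return u_i / (u_i + u_j + 1.0)

def profit(p_i, p_j):
    return (p_i - COST) * demand(p_i, p_j)

grid = [1.0 + 0.01 * k for k in range(101)]  # candidate prices in [1, 2]

# Symmetric Nash price: iterate each firm's best response to a fixed point.
p = 1.5
for _ in range(100):
    p = max(grid, key=lambda x: profit(x, p))

# Joint-profit-maximizing ("collusive") symmetric price.
p_collusive = max(grid, key=lambda x: profit(x, x))

print(f"competitive (Nash) price ~ {p:.2f}, collusive price ~ {p_collusive:.2f}")
```

The gap between the two benchmark prices is what makes "pricing above the competitive level" a detectable outcome in such experiments.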
Related papers
- Universal Model Routing for Efficient LLM Inference [72.65083061619752]
We consider the problem of dynamic routing, where new, previously unobserved LLMs are available at test time.
We propose a new approach to this problem that relies on representing each LLM as a feature vector, derived based on predictions on a set of representative prompts.
We prove that these strategies are estimates of a theoretically optimal routing rule, and provide an excess risk bound to quantify their errors.
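A minimal sketch of the feature-vector idea, with all names and data illustrative: each LLM is summarized by its scores on a set of representative prompts, and a query is sent to the LLM whose vector is closest to the query's profile. The paper's actual routing strategies (and their optimality guarantees) are more involved than this nearest-vector stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5  # number of representative prompts (illustrative)

# Feature vector per LLM: e.g. correctness scores on the k prompts.
llm_features = {
    "llm_a": rng.random(k),
    "llm_b": rng.random(k),
    "llm_c": rng.random(k),
}

def route(query_profile, features):
    """Pick the LLM whose feature vector is closest to the query profile."""
    return min(features, key=lambda name: np.linalg.norm(features[name] - query_profile))

# A previously unseen LLM can be added at test time: just evaluate it on
# the same representative prompts to get its feature vector -- no retraining.
llm_features["new_llm"] = rng.random(k)
print(route(rng.random(k), llm_features))
```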
arXiv Detail & Related papers (2025-02-12T20:30:28Z)
- Differentially Private Steering for Large Language Model Alignment [55.30573701583768]
We present the first study of aligning Large Language Models with private datasets.
Our work proposes the Private Steering for LLM Alignment (PSA) algorithm.
Our results show that PSA achieves DP guarantees for LLM alignment with minimal loss in performance.
arXiv Detail & Related papers (2025-01-30T17:58:36Z)
- Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation [68.58373854950294]
We focus on causal reasoning and address the task of establishing causal relationships based on correlation information.
We introduce a prompting strategy for this problem that breaks the original task into fixed subquestions.
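The fixed-subquestion idea can be sketched as below; the subquestion wording and the `ask` helper are hypothetical placeholders, not the paper's actual prompts.

```python
# Decompose a correlation-to-causation query into fixed subquestions,
# answer each separately, then request a final verdict.
SUBQUESTIONS = [
    "List all variable pairs that are stated to be correlated.",
    "For each correlated pair, could a common cause explain the correlation?",
    "Is the hypothesized causal direction consistent with all given correlations?",
]

def ask(llm, prompt):
    # Placeholder for an actual LLM call.
    return llm(prompt)

def infer_causation(llm, premise, hypothesis):
    context = f"Premise: {premise}\nHypothesis: {hypothesis}"
    answers = [ask(llm, f"{context}\n{q}") for q in SUBQUESTIONS]
    final = ask(llm, context + "\nGiven these intermediate answers:\n"
                + "\n".join(answers) + "\nIs the hypothesis valid? Answer yes or no.")
    return final

# Usage with a stub model:
print(infer_causation(lambda p: "yes", "A correlates with B.", "A causes B."))
```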
We evaluate our approach on an existing causal benchmark, Corr2Cause.
arXiv Detail & Related papers (2024-12-18T15:32:27Z)
- Control Large Language Models via Divide and Conquer [94.48784966256463]
This paper investigates controllable generation for large language models (LLMs) with prompt-based control, focusing on Lexically Constrained Generation (LCG).
We evaluate the performance of LLMs on satisfying lexical constraints with prompt-based control, as well as their efficacy in downstream applications.
arXiv Detail & Related papers (2024-10-06T21:20:06Z)
- Artificial Intelligence and Algorithmic Price Collusion in Two-sided Markets [9.053163124987535]
We examine how AI agents using Q-learning engage in tacit collusion in two-sided markets.
Our experiments reveal that AI-driven platforms achieve higher collusion levels compared to Bertrand competition.
Increased network externalities significantly enhance collusion, suggesting AI algorithms exploit them to maximize profits.
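As a rough sketch of the kind of agent studied here, the loop below runs stateless Q-learning pricing in a winner-takes-all duopoly. This is an illustration only: the paper's two-sided setup with network externalities is richer, its agents condition on history, and whether supracompetitive prices emerge depends heavily on such choices. All parameters are assumptions.

```python
import random

random.seed(0)

PRICES = [1, 2, 3, 4]        # 1 = marginal cost, 4 = monopoly price
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def profits(p0, p1):
    # Lower-priced firm takes the whole market at margin (p - cost); ties split.
    if p0 < p1: return (p0 - 1, 0)
    if p1 < p0: return (0, p1 - 1)
    return ((p0 - 1) / 2, (p1 - 1) / 2)

q = [{p: 0.0 for p in PRICES} for _ in range(2)]  # one Q-table per agent

def choose(i):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(q[i], key=q[i].get)

for _ in range(50_000):
    a0, a1 = choose(0), choose(1)
    r = profits(a0, a1)
    for i, a in enumerate((a0, a1)):
        best_next = max(q[i].values())
        q[i][a] += ALPHA * (r[i] + GAMMA * best_next - q[i][a])

print([max(qi, key=qi.get) for qi in q])  # greedy prices after learning
```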
arXiv Detail & Related papers (2024-07-04T17:57:56Z)
- Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model [28.288949710191158]
Large language models (LLMs) have showcased impressive multilingual machine translation ability.
Unlike encoder-decoder style models, decoder-only LLMs lack an explicit alignment between source and target contexts.
We propose to encourage LLMs to pay more attention to the source context from both source and target perspectives.
arXiv Detail & Related papers (2024-06-11T07:49:04Z)
- By Fair Means or Foul: Quantifying Collusion in a Market Simulation with Deep Reinforcement Learning [1.5249435285717095]
This research employs an experimental oligopoly model of repeated price competition.
We investigate the strategies and emerging pricing patterns developed by the agents, which may lead to a collusive outcome.
Our findings indicate that RL-based AI agents converge to a collusive state characterized by the charging of supracompetitive prices.
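Supracompetitive pricing is commonly quantified in this literature with a normalized index that places the average realized price between the competitive (Nash) and monopoly benchmarks; a minimal version:

```python
def collusion_index(avg_price, nash_price, monopoly_price):
    """0 = competitive (Nash) pricing, 1 = full monopoly pricing."""
    return (avg_price - nash_price) / (monopoly_price - nash_price)

# Illustrative numbers: agents averaging 1.8 between Nash 1.5 and monopoly 2.0.
print(round(collusion_index(1.8, 1.5, 2.0), 2))  # -> 0.6
```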
arXiv Detail & Related papers (2024-06-04T15:35:08Z)
- Measuring Bargaining Abilities of LLMs: A Benchmark and A Buyer-Enhancement Method [17.388837360641276]
This paper describes the Bargaining task as an asymmetric incomplete information game.
It allows us to quantitatively assess an agent's performance in the Bargaining task.
We propose a novel approach called OG-Narrator that integrates a deterministic Offer Generator to control the price range of Buyer's offers.
arXiv Detail & Related papers (2024-02-24T13:36:58Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model which has far fewer parameters, and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
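A hedged sketch of the proxy-then-retrieve flow: the slim model drafts a heuristic answer, a judge decides whether retrieval is needed, and only then is the large model called. The string-matching judge and all function names below are illustrative stand-ins; the paper uses a trained model for this decision.

```python
HEDGES = ("i don't know", "i'm not sure", "cannot answer")

def needs_retrieval(heuristic_answer: str) -> bool:
    # Toy judge: treat hedging phrases in the draft as missing knowledge.
    text = heuristic_answer.lower()
    return any(h in text for h in HEDGES)

def answer(question, proxy_llm, big_llm, retriever):
    draft = proxy_llm(question)            # cheap draft from the slim proxy
    if needs_retrieval(draft):
        docs = retriever(question)         # fetch the missing knowledge
        return big_llm(f"{question}\nContext: {docs}")
    return big_llm(question)               # the LLM already knows enough

# Usage with stubs:
out = answer("capital of France?",
             proxy_llm=lambda q: "Paris",
             big_llm=lambda q: "Paris.",
             retriever=lambda q: "[doc]")
print(out)
```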
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLMs) leverage human feedback to improve their generation quality.
We propose LLMRefine, an inference-time optimization method to refine an LLM's output.
We conduct experiments on three text generation tasks, including machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation tasks, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.