From Competition to Coordination: Market Making as a Scalable Framework for Safe and Aligned Multi-Agent LLM Systems
- URL: http://arxiv.org/abs/2511.17621v1
- Date: Tue, 18 Nov 2025 16:47:15 GMT
- Title: From Competition to Coordination: Market Making as a Scalable Framework for Safe and Aligned Multi-Agent LLM Systems
- Authors: Brendan Gho, Suman Muppavarapu, Afnan Shaik, Tyson Tsay, James Begin, Kevin Zhu, Archana Vaidheeswaran, Vasu Sharma
- Abstract summary: We introduce a market-making framework for multi-agent large language model (LLM) coordination. In this setup, each agent acts as a market participant, updating and trading probabilistic beliefs to converge toward shared, truthful outcomes. Empirically, we evaluate this approach across factual reasoning, ethical judgment, and commonsense inference tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As foundation models are increasingly deployed as interacting agents in multi-agent systems, their collective behavior raises new challenges for trustworthiness, transparency, and accountability. Traditional coordination mechanisms, such as centralized oversight or adversarial adjudication, struggle to scale and often obscure how decisions emerge. We introduce a market-making framework for multi-agent large language model (LLM) coordination that organizes agent interactions as structured economic exchanges. In this setup, each agent acts as a market participant, updating and trading probabilistic beliefs to converge toward shared, truthful outcomes. By aligning local incentives with collective epistemic goals, the framework promotes self-organizing, verifiable reasoning without requiring external enforcement. Empirically, we evaluate this approach across factual reasoning, ethical judgment, and commonsense inference tasks. Market-based coordination yields accuracy gains of up to 10% over single-shot baselines while preserving interpretability and transparency of intermediate reasoning steps. Beyond these improvements, our findings demonstrate that economic coordination principles can operationalize accountability and robustness in multi-agent LLM systems, offering a scalable pathway toward self-correcting, socially responsible AI capable of maintaining trust and oversight in real-world deployment scenarios.
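To make the "agents trade probabilistic beliefs" idea concrete, here is a minimal sketch of one classical market-making mechanism, Hanson's logarithmic market scoring rule (LMSR), which such a framework could build on. This is an illustrative assumption, not the paper's actual implementation: the class name, the two-outcome setup, and the `trade_to_belief` helper are all hypothetical.

```python
import math

class LMSRMarket:
    """Logarithmic Market Scoring Rule market maker over two outcomes (yes/no)."""

    def __init__(self, b=10.0):
        self.b = b            # liquidity parameter: larger b = harder to move the price
        self.q = [0.0, 0.0]   # outstanding shares per outcome

    def price(self, i=0):
        """Current market probability of outcome i."""
        e = [math.exp(qj / self.b) for qj in self.q]
        return e[i] / sum(e)

    def cost(self, q):
        """LMSR cost function C(q) = b * log(sum_j exp(q_j / b))."""
        return self.b * math.log(sum(math.exp(qj / self.b) for qj in q))

    def trade_to_belief(self, p):
        """A myopic agent trades until the market price equals its belief p.

        For two outcomes, price p is reached when q0 - q1 = b * log(p / (1 - p)).
        Returns the cost the agent pays for the trade.
        """
        target = self.b * math.log(p / (1 - p))
        before = self.cost(self.q)
        self.q[0] += target - (self.q[0] - self.q[1])  # adjust only the 'yes' leg
        return self.cost(self.q) - before

market = LMSRMarket(b=10.0)
for belief in [0.6, 0.8, 0.7]:      # each agent in turn moves the price to its belief
    market.trade_to_belief(belief)
print(round(market.price(), 3))     # prints 0.7
```

With purely myopic traders the final price simply equals the last trader's belief; genuine aggregation emerges when agents trade repeatedly or condition on the prices set by others, which is the kind of dynamic the abstract's convergence claim refers to.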
Related papers
- AgenticSimLaw: A Juvenile Courtroom Multi-Agent Debate Simulation for Explainable High-Stakes Tabular Decision Making [0.6218206949753592]
We introduce AgenticSimLaw, a role-structured, multi-agent debate framework that provides transparent and controllable test-time reasoning. Unlike black-box approaches, our courtroom-style orchestration explicitly defines agent roles. We benchmark this framework on young adult recidivism prediction using the NLSY97 dataset.
arXiv Detail & Related papers (2026-01-29T16:26:10Z) - Why Keep Your Doubts to Yourself? Trading Visual Uncertainties in Multi-Agent Bandit Systems [21.356119126402902]
We introduce Agora, a framework that reframes coordination as a decentralized market for uncertainty. A market-aware broker, extending Thompson Sampling, initiates collaboration and guides the system toward cost-efficient equilibria. Results establish market-based coordination as a principled and scalable paradigm for building economically viable visual intelligence systems.
arXiv Detail & Related papers (2026-01-26T17:58:53Z) - Agentic Reasoning for Large Language Models [122.81018455095999]
Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. Large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, but struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction.
arXiv Detail & Related papers (2026-01-18T18:58:23Z) - Evaluating Generalization Capabilities of LLM-Based Agents in Mixed-Motive Scenarios Using Concordia [100.74015791021044]
Large Language Model (LLM) agents have demonstrated impressive capabilities for social interaction. Existing evaluation methods fail to measure how well these capabilities generalize to novel social situations. We present empirical results from the NeurIPS 2024 Concordia Contest, where agents were evaluated on their ability to achieve mutual gains.
arXiv Detail & Related papers (2025-12-03T00:11:05Z) - A General Incentives-Based Framework for Fairness in Multi-agent Resource Allocation [4.930376365020355]
We introduce the General Incentives-based Framework for Fairness (GIFF), a novel approach for fair multi-agent resource allocation that infers fair decision-making from standard value functions.
arXiv Detail & Related papers (2025-10-30T17:37:51Z) - Magentic Marketplace: An Open-Source Environment for Studying Agentic Markets [74.91125572848439]
We study two-sided agentic marketplaces where Assistant agents represent consumers and Service agents represent competing businesses. This environment enables us to study key market dynamics: the utility agents achieve, behavioral biases, vulnerability to manipulation, and how search mechanisms shape market outcomes. Our experiments show that frontier models can approach optimal welfare, but only under ideal search conditions. Performance degrades sharply with scale, and all models exhibit severe first-proposal bias, creating 10-30x advantages for response speed over quality.
arXiv Detail & Related papers (2025-10-27T18:35:59Z) - Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games [87.5673042805229]
How large language models balance self-interest and collective well-being is a critical challenge for ensuring alignment, robustness, and safe deployment. We adapt a public goods game with institutional choice from behavioral economics, allowing us to observe how different LLMs navigate social dilemmas. Surprisingly, we find that reasoning LLMs, such as the o1 series, struggle significantly with cooperation.
arXiv Detail & Related papers (2025-06-29T15:02:47Z) - Human-AI Governance (HAIG): A Trust-Utility Approach [0.0]
This paper introduces the HAIG framework for analysing trust dynamics across evolving human-AI relationships. Our analysis reveals how technical advances in self-supervision, reasoning authority, and distributed decision-making drive non-uniform trust evolution.
arXiv Detail & Related papers (2025-05-03T01:57:08Z) - Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML. We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z) - Fairness in Agentic AI: A Unified Framework for Ethical and Equitable Multi-Agent System [0.0]
This paper introduces a novel framework where fairness is treated as a dynamic, emergent property of agent interactions. The framework integrates fairness constraints, bias mitigation strategies, and incentive mechanisms to align autonomous agent behaviors with societal values.
arXiv Detail & Related papers (2025-02-11T04:42:00Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)