Cost-Effective Communication: An Auction-based Method for Language Agent Interaction
- URL: http://arxiv.org/abs/2511.13193v1
- Date: Mon, 17 Nov 2025 10:00:20 GMT
- Title: Cost-Effective Communication: An Auction-based Method for Language Agent Interaction
- Authors: Yijia Fan, Jusheng Zhang, Kaitong Cai, Jing Yang, Chengpei Tang, Jian Wang, Keze Wang,
- Abstract summary: We introduce the Dynamic Auction-based Language Agent (DALA), a novel framework that treats communication bandwidth as a scarce and tradable resource. DALA achieves new state-of-the-art performance across seven challenging reasoning benchmarks, including 84.32% on MMLU and a 91.21% pass@1 rate on HumanEval.
- Score: 15.493640295624994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent systems (MAS) built on large language models (LLMs) often suffer from inefficient "free-for-all" communication, leading to exponential token costs and low signal-to-noise ratios that hinder their practical deployment. We challenge the notion that more communication is always beneficial, hypothesizing instead that the core issue is the absence of resource rationality. We argue that "free" communication, by ignoring the principle of scarcity, inherently breeds inefficiency and unnecessary expense. To address this, we introduce the Dynamic Auction-based Language Agent (DALA), a novel framework that treats communication bandwidth as a scarce and tradable resource. Specifically, DALA casts inter-agent communication as a centralized auction in which agents learn to bid for the opportunity to speak based on the predicted value density of their messages. DALA thus intrinsically encourages agents to produce concise, informative messages while filtering out low-value communication. Extensive experiments demonstrate that our economically driven DALA achieves new state-of-the-art performance across seven challenging reasoning benchmarks, including 84.32% on MMLU and a 91.21% pass@1 rate on HumanEval. Notably, this is accomplished with remarkable efficiency: DALA uses only 6.25 million tokens on GSM8K, a fraction of the resources consumed by current state-of-the-art methods. Further analysis reveals that DALA cultivates the emergent skill of strategic silence, dynamically adapting its communication strategy from verbosity to silence under resource constraints.
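The auction mechanism sketched in the abstract can be illustrated with a minimal toy implementation. This is a hedged sketch, not the paper's actual method: the names (`Bid`, `run_auction`), the word-count token proxy, the value-density formula, and the reserve-price cutoff are all illustrative assumptions; the paper's learned bidding policy is replaced here by fixed predicted values.

```python
# Toy sketch of an auction-based message gate in the spirit of DALA.
# Agents bid with a predicted value for their message; the auctioneer
# admits bids in order of value density (value per token) until a
# shared token budget is exhausted. All names and formulas here are
# illustrative assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass
class Bid:
    agent_id: str
    message: str
    predicted_value: float  # agent's estimate of the message's usefulness

    @property
    def value_density(self) -> float:
        # Value per token (word count as a crude token proxy):
        # this ratio rewards concise, informative messages.
        tokens = max(len(self.message.split()), 1)
        return self.predicted_value / tokens


def run_auction(bids: list[Bid], token_budget: int,
                reserve_density: float = 0.0) -> list[Bid]:
    """Centralized auction: admit the highest value-density bids that fit
    in the shared token budget; bids below the reserve stay silent."""
    winners: list[Bid] = []
    spent = 0
    for bid in sorted(bids, key=lambda b: b.value_density, reverse=True):
        if bid.value_density < reserve_density:
            break  # all remaining bids are even lower: strategic silence
        cost = len(bid.message.split())
        if spent + cost <= token_budget:
            winners.append(bid)
            spent += cost
    return winners


bids = [
    Bid("a1", "final answer is 42", 9.0),
    Bid("a2", "thinking out loud about many unrelated things at length", 2.0),
    Bid("a3", "key constraint: x must be even", 6.0),
]
for w in run_auction(bids, token_budget=12):
    print(w.agent_id)  # a1 and a3 win; a2's verbose low-value bid is filtered
```

Under a 12-token budget, the concise high-value messages from `a1` and `a3` are admitted while `a2`'s verbose, low-density message is dropped, mirroring the filtering behavior the abstract describes.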
Related papers
- MT-PingEval: Evaluating Multi-Turn Collaboration with Private Information Games [70.37904949359938]
We evaluate language models in multi-turn interactions using a suite of collaborative games that require effective communication about private information. We find that language models are unable to use interactive collaboration to improve over the non-interactive baseline scenario. We analyze the linguistic features of these dialogues, assessing the roles of sycophancy, information density, and discourse coherence.
arXiv Detail & Related papers (2026-02-27T17:13:20Z) - Verification Required: The Impact of Information Credibility on AI Persuasion [13.454393198058398]
We introduce MixTalk, a strategic communication game for LLM-to-LLM interaction that models information credibility. We evaluate state-of-the-art LLM agents in large-scale tournaments across three realistic deployment settings. We propose Tournament Oracle Policy Distillation (TOPD), an offline method that distills a tournament oracle policy from interaction logs and deploys it in-context at inference time.
arXiv Detail & Related papers (2026-02-01T02:22:28Z) - Enabling Agents to Communicate Entirely in Latent Space [19.98668682094137]
We propose Interlat (Inter-agent Latent Space Communication), a paradigm that leverages the last hidden states of an LLM as a representation of its mind for direct transmission. An additional compression stage further condenses latent communication via entirely latent-space reasoning. Experiments demonstrate that Interlat outperforms both fine-tuned chain-of-thought (CoT) prompting and single-agent baselines.
arXiv Detail & Related papers (2025-11-12T09:37:22Z) - In-Context Reinforcement Learning via Communicative World Models [49.00028802135605]
This work formulates ICRL as a two-agent emergent communication problem. It introduces CORAL, a framework that learns a transferable communicative context. Our experiments demonstrate that this approach enables the CA to achieve significant gains in sample efficiency.
arXiv Detail & Related papers (2025-08-08T19:23:23Z) - Communication-Efficient Hybrid Language Model via Uncertainty-Aware Opportunistic and Compressed Transmission [65.17811759381978]
A hybrid language model (HLM) generates draft tokens that are validated and corrected by a remote large language model (LLM). We propose the communication-efficient and uncertainty-aware HLM (CU-HLM). We show that CU-HLM achieves up to 206$\times$ higher token throughput by skipping 74.8% of transmissions with 97.4% vocabulary compression, while maintaining 97.4% accuracy.
arXiv Detail & Related papers (2025-05-17T02:10:34Z) - Task-Oriented Semantic Communication in Large Multimodal Models-based Vehicle Networks [55.32199894495722]
We investigate an LMM-based vehicle AI assistant using a Large Language and Vision Assistant (LLaVA). To reduce computational demands and shorten response time, we optimize LLaVA's image slicing to selectively focus on areas of utmost interest to users. We construct a Visual Question Answering (VQA) dataset for traffic scenarios to evaluate effectiveness.
arXiv Detail & Related papers (2025-05-05T07:18:47Z) - Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multiuser SC systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through the application of Stackelberg hyper game theory.
Simulation results show that the proposed Stackelberg hyper game results in efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z) - Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication [76.04373033082948]
Large Language Models (LLMs) have recently made significant strides in complex reasoning tasks through the Chain-of-Thought technique.
We propose Exchange-of-Thought (EoT), a novel framework that enables cross-model communication during problem-solving.
arXiv Detail & Related papers (2023-12-04T11:53:56Z) - Towards True Lossless Sparse Communication in Multi-Agent Systems [1.911678487931003]
Communication enables agents to cooperate to achieve their goals.
Recent work in learning sparse individualized communication suffers from high variance during training.
We use the information bottleneck to reframe sparsity as a representation learning problem.
arXiv Detail & Related papers (2022-11-30T20:43:34Z) - Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - The Enforcers: Consistent Sparse-Discrete Methods for Constraining Informative Emergent Communication [5.432350993419402]
Communication enables agents to cooperate to achieve their goals.
Recent work in learning sparse communication suffers from high-variance training, where the price of decreasing communication is a decrease in reward, particularly in cooperative tasks.
This research addresses the above issues by limiting the loss in reward of decreasing communication and eliminating the penalty for discretization.
arXiv Detail & Related papers (2022-01-19T07:31:06Z) - Gaussian Process Based Message Filtering for Robust Multi-Agent Cooperation in the Presence of Adversarial Communication [5.161531917413708]
We consider the problem of providing robustness to adversarial communication in multi-agent systems.
We propose a communication architecture based on Graph Neural Networks (GNNs).
We show that our filtering method is able to reduce the impact that non-cooperative agents cause.
arXiv Detail & Related papers (2020-12-01T14:21:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.