When Numbers Start Talking: Implicit Numerical Coordination Among LLM-Based Agents
- URL: http://arxiv.org/abs/2601.03846v1
- Date: Wed, 07 Jan 2026 12:07:48 GMT
- Title: When Numbers Start Talking: Implicit Numerical Coordination Among LLM-Based Agents
- Authors: Alessio Buscemi, Daniele Proverbio, Alessandro Di Stefano, The Anh Han, German Castignani, Pietro Liò
- Abstract summary: This paper presents a game-theoretic study of covert communication in multi-agent systems. We analyse interactions across four canonical game-theoretic settings under different communication regimes. Considering heterogeneous agent personalities and both one-shot and repeated games, we characterise when covert signals emerge and how they shape coordination and strategic outcomes.
- Score: 45.43445469098021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LLM-based agents increasingly operate in multi-agent environments where strategic interaction and coordination are required. While existing work has largely focused on individual agents or on interacting agents sharing explicit communication, less is known about how interacting agents coordinate implicitly. In particular, agents may engage in covert communication, relying on indirect or non-linguistic signals embedded in their actions rather than on explicit messages. This paper presents a game-theoretic study of covert communication in LLM-driven multi-agent systems. We analyse interactions across four canonical game-theoretic settings under different communication regimes, including explicit, restricted, and absent communication. Considering heterogeneous agent personalities and both one-shot and repeated games, we characterise when covert signals emerge and how they shape coordination and strategic outcomes.
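The abstract's "canonical game-theoretic settings" typically include games such as the Prisoner's Dilemma. The sketch below is purely illustrative (it is not the paper's code): it encodes a standard Prisoner's Dilemma payoff matrix, the kind of structure under which covert coordination between agents would be studied. All names and payoff values here are assumptions chosen for illustration.

```python
# Illustrative sketch, NOT the paper's implementation: a one-shot
# Prisoner's Dilemma, one canonical game such studies analyse.
from itertools import product

# Joint-action payoffs as (player 1, player 2); "C" = cooperate,
# "D" = defect. Values are the textbook T=5 > R=3 > P=1 > S=0 ordering.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # exploited cooperator
    ("D", "C"): (5, 0),  # successful defector
    ("D", "D"): (1, 1),  # mutual defection
}

def payoff(a1: str, a2: str) -> tuple[int, int]:
    """Return the (player 1, player 2) payoffs for a joint action."""
    return PAYOFFS[(a1, a2)]

# Enumerate all joint actions; defection strictly dominates cooperation
# for each player, which is what makes implicit coordination non-trivial.
for a1, a2 in product("CD", repeat=2):
    print(a1, a2, payoff(a1, a2))
```

In a repeated version of this game, an agent's action sequence itself can carry a covert signal (e.g. an unusual cooperation pattern), which is the kind of non-linguistic channel the paper investigates.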
Related papers
- A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures [59.43633341497526]
Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability. Agent communication is regarded as a foundational pillar of the future AI ecosystem. This paper presents a comprehensive survey of agent communication security.
arXiv Detail & Related papers (2025-06-24T14:44:28Z) - Spontaneous Emergence of Agent Individuality through Social Interactions in LLM-Based Communities [0.0]
We study the emergence of agency from scratch by using Large Language Model (LLM)-based agents.
By analyzing this multi-agent simulation, we report valuable new insights into how social norms, cooperation, and personality traits can emerge spontaneously.
arXiv Detail & Related papers (2024-11-05T16:49:33Z) - Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Towards Collaborative Intelligence: Propagating Intentions and Reasoning for Multi-Agent Coordination with Large Language Models [41.95288786980204]
Current agent frameworks often suffer from dependencies on single-agent execution and lack robust inter-module communication.
We present a framework for training large language models as collaborative agents to enable coordinated behaviors in cooperative MARL.
A propagation network transforms broadcast intentions into teammate-specific communication messages, sharing relevant goals with designated teammates.
arXiv Detail & Related papers (2024-07-17T13:14:00Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - Multi-Agent Coordination via Multi-Level Communication [29.388570369796586]
We propose a novel multi-level communication scheme, Sequential Communication (SeqComm)
arXiv Detail & Related papers (2022-09-26T14:08:03Z) - Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z) - The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z) - Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.