Normative Equivalence in Human-AI Cooperation: Behaviour, Not Identity, Drives Cooperation in Mixed-Agent Groups
- URL: http://arxiv.org/abs/2601.20487v2
- Date: Thu, 29 Jan 2026 12:58:00 GMT
- Title: Normative Equivalence in Human-AI Cooperation: Behaviour, Not Identity, Drives Cooperation in Mixed-Agent Groups
- Authors: Nico Mutzner, Taha Yasseri, Heiko Rauhut
- Abstract summary: We study how integrating AI agents affects the emergence and maintenance of cooperative norms in small groups. In our sample of 236 participants, we found that reciprocal group dynamics and behavioural inertia primarily drove cooperation. Participants' behaviour followed the same normative logic across human and AI conditions, indicating that cooperation depended on group behaviour rather than partner identity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The introduction of artificial intelligence (AI) agents into human group settings raises essential questions about how these novel participants influence cooperative social norms. While previous studies on human-AI cooperation have primarily focused on dyadic interactions, little is known about how integrating AI agents affects the emergence and maintenance of cooperative norms in small groups. This study addresses this gap through an online experiment using a repeated four-player Public Goods Game (PGG). Each group consisted of three human participants and one bot, which was framed either as human or AI and followed one of three predefined decision strategies: unconditional cooperation, conditional cooperation, or free-riding. In our sample of 236 participants, we found that reciprocal group dynamics and behavioural inertia primarily drove cooperation. These normative mechanisms operated identically across conditions, resulting in cooperation levels that did not differ significantly between human and AI labels. Furthermore, we found no evidence of differences in norm persistence in a follow-up Prisoner's Dilemma, or in participants' normative perceptions. Participants' behaviour followed the same normative logic across human and AI conditions, indicating that cooperation depended on group behaviour rather than partner identity. This supports a pattern of normative equivalence, in which the mechanisms that sustain cooperation function similarly in mixed human-AI and all-human groups. These findings suggest that cooperative norms are flexible enough to extend to artificial agents, blurring the boundary between humans and AI in collective decision-making.
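The experimental setup described above can be sketched as a standard linear Public Goods Game with one scripted bot. This is an illustrative reconstruction only: the endowment, multiplier, and the bot's exact conditional-cooperation rule are assumptions, as the abstract does not specify them.

```python
# Minimal sketch of one round of a four-player Public Goods Game with a bot.
# ENDOWMENT and MULTIPLIER are illustrative assumptions, not the paper's values.

ENDOWMENT = 20      # tokens each player receives per round (assumed)
MULTIPLIER = 1.6    # public-good multiplier r, chosen so 1 < r < n (assumed)

def payoffs(contributions, endowment=ENDOWMENT, r=MULTIPLIER):
    """Linear PGG payoff: keep what you did not contribute,
    plus an equal share of the multiplied common pool."""
    pool_share = r * sum(contributions) / len(contributions)
    return [endowment - c + pool_share for c in contributions]

def bot_contribution(strategy, last_round_human_contribs, endowment=ENDOWMENT):
    """The three predefined bot strategies named in the abstract."""
    if strategy == "unconditional_cooperation":
        return endowment                      # always contribute everything
    if strategy == "free_riding":
        return 0                              # never contribute
    if strategy == "conditional_cooperation":
        # Match the humans' average contribution from the previous round
        # (one common operationalisation; the paper may use another rule).
        if not last_round_human_contribs:
            return endowment // 2
        return round(sum(last_round_human_contribs) / len(last_round_human_contribs))
    raise ValueError(f"unknown strategy: {strategy}")

# One illustrative round: three humans contribute 10, 15, and 5 tokens,
# and a conditionally cooperating bot matches their average (10).
humans = [10, 15, 5]
bot = bot_contribution("conditional_cooperation", humans)
print(payoffs(humans + [bot]))   # -> [26.0, 21.0, 31.0, 26.0]
```

The payoff structure makes free-riding individually tempting (r/n < 1) while full cooperation maximises the group total (r > 1), which is the tension the bot strategies probe.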
Related papers
- Human-Human-AI Triadic Programming: Uncovering the Role of AI Agent and the Value of Human Partner in Collaborative Learning [10.772613370888516]
Our work introduces human-human-AI (HHAI) triadic programming, where an AI agent serves as an additional collaborator rather than a substitute for a human partner. In the triadic HHAI conditions, participants relied significantly less on AI-generated code in their work. These findings demonstrate how triadic settings activate socially shared regulation of learning by making AI use visible and accountable to a human peer.
arXiv Detail & Related papers (2026-01-17T18:32:54Z)
- Cooperation Through Indirect Reciprocity in Child-Robot Interactions [81.62347137438248]
We investigate whether indirect reciprocity (IR) can be transposed to child-robot interactions. We find that IR extends to children and robots solving coordination dilemmas. We observe that cooperation through multi-armed bandit algorithms is highly dependent on the strategies revealed by humans.
arXiv Detail & Related papers (2025-11-07T07:08:32Z)
- When Trust Collides: Decoding Human-LLM Cooperation Dynamics through the Prisoner's Dilemma [10.143277649817096]
This study investigates human cooperative attitudes and behaviors toward large language model (LLM) agents. Results revealed significant effects of declared agent identity on most cooperation-related behaviors. These findings contribute to our understanding of human adaptation in competitive cooperation with autonomous agents.
arXiv Detail & Related papers (2025-03-10T13:37:36Z)
- Relational Norms for Human-AI Cooperation [3.8608750807106977]
How we interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions.
arXiv Detail & Related papers (2025-02-17T18:23:29Z)
- Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [50.657070334404835]
Collaborative Gym (Co-Gym) is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments. We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions. Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning. We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning. We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Position: Towards Bidirectional Human-AI Alignment [109.57781720848669]
We argue that the research community should explicitly define and critically reflect on "alignment" to account for the bidirectional and dynamic relationship between humans and AI. We introduce the Bidirectional Human-AI Alignment framework, which not only incorporates traditional efforts to align AI with human values but also introduces the critical, underexplored dimension of aligning humans with AI.
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas [15.785674974107204]
The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines.
Recent advancements in Artificial Intelligence have significantly reshaped this field.
This survey examines three key areas at the intersection of AI and cooperation in social dilemmas.
arXiv Detail & Related papers (2024-02-27T07:31:30Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.