Cooperation Through Indirect Reciprocity in Child-Robot Interactions
- URL: http://arxiv.org/abs/2512.20621v1
- Date: Fri, 07 Nov 2025 07:08:32 GMT
- Title: Cooperation Through Indirect Reciprocity in Child-Robot Interactions
- Authors: Isabel Neto, Alexandre S. Pires, Filipa Correia, Fernando P. Santos
- Abstract summary: We investigate whether indirect reciprocity (IR) can be transposed to child-robot interactions. We find that IR extends to children and robots solving coordination dilemmas. We observe that cooperation through multi-armed bandit algorithms is highly dependent on the strategies revealed by humans.
- Score: 81.62347137438248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social interactions increasingly involve artificial agents, such as conversational or collaborative bots. Understanding trust and prosociality in these settings is fundamental to improving human-AI teamwork. Research in biology and the social sciences has identified mechanisms that sustain cooperation among humans. Indirect reciprocity (IR) is one of them. With IR, helping someone can enhance an individual's reputation, nudging others to reciprocate in the future. Transposing IR to human-AI interactions is, however, challenging, as differences in human demographics, moral judgements, and agents' learning dynamics can affect how interactions are assessed. To study IR in human-AI groups, we combine laboratory experiments and theoretical modelling. We investigate 1) whether indirect reciprocity can be transposed to child-robot interactions; 2) whether artificial agents can learn to cooperate given children's strategies; and 3) how differences in learning algorithms impact human-AI cooperation. We find that IR extends to children and robots solving coordination dilemmas. Furthermore, we observe that the strategies revealed by children provide a sufficient signal for multi-armed bandit algorithms to learn cooperative actions. Beyond the experimental scenarios, we observe that cooperation through multi-armed bandit algorithms is highly dependent on the strategies revealed by humans.
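The interplay the abstract describes, a bandit learner adapting to a human partner who conditions help on reputation, can be illustrated with a minimal sketch. This is not the paper's implementation: the epsilon-greedy rule, the donation-game payoffs (benefit 4, cost 1), and the image-score reputation update are all hypothetical choices made for illustration.

```python
import random

# Illustrative sketch (not the paper's method): an epsilon-greedy bandit
# agent repeatedly chooses Defect (0) or Cooperate (1) against a partner
# who follows a simple indirect-reciprocity rule: help only agents whose
# reputation (image score) is good. Payoff values are hypothetical.
random.seed(0)

BENEFIT, COST = 4.0, 1.0      # hypothetical donation-game parameters
EPSILON, ROUNDS = 0.1, 5000

q = [0.0, 0.0]                # running value estimates for Defect / Cooperate
counts = [0, 0]

for _ in range(ROUNDS):
    # Epsilon-greedy action selection over the two arms.
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: q[a])

    # Cooperation earns a good reputation; defection loses it.
    reputation = 1 if action == 1 else 0
    # The human partner's IR strategy: help only good-reputation agents.
    partner_helps = (reputation == 1)
    payoff = (BENEFIT if partner_helps else 0.0) - (COST if action == 1 else 0.0)

    # Incremental sample-average update of the chosen arm's value.
    counts[action] += 1
    q[action] += (payoff - q[action]) / counts[action]

print("Estimated values (defect, cooperate):", q)
print("Learned action:", "Cooperate" if q[1] > q[0] else "Defect")
```

Because the partner reciprocates only toward good reputations, cooperating yields benefit minus cost while defecting yields nothing, so the bandit's estimates converge to favour cooperation; if the partner instead helped unconditionally, the same algorithm would learn to defect, which mirrors the abstract's point that the learned behaviour depends on the strategy the human reveals.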
Related papers
- Normative Equivalence in Human-AI Cooperation: Behaviour, Not Identity, Drives Cooperation in Mixed-Agent Groups [0.3823356975862005]
We study how integrating AI agents affects the emergence and maintenance of cooperative norms in small groups. In our sample of 236 participants, we found that reciprocal group dynamics and behavioural inertia primarily drove cooperation. Participants' behaviour followed the same normative logic across human and AI conditions, indicating that cooperation depended on group behaviour rather than partner identity.
arXiv Detail & Related papers (2026-01-28T11:01:09Z) - When Trust Collides: Decoding Human-LLM Cooperation Dynamics through the Prisoner's Dilemma [10.143277649817096]
This study investigates human cooperative attitudes and behaviors toward large language model (LLM) agents. Results revealed significant effects of declared agent identity on most cooperation-related behaviors. These findings contribute to our understanding of human adaptation in competitive cooperation with autonomous agents.
arXiv Detail & Related papers (2025-03-10T13:37:36Z) - Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning. We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning. We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z) - Human-Robot Mutual Learning through Affective-Linguistic Interaction and Differential Outcomes Training [Pre-Print] [0.3811184252495269]
We test how affective-linguistic communication, in combination with differential outcomes training, affects mutual learning in a human-robot context.
Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot attempting to learn how best to communicate internal, homeostatically-controlled needs.
arXiv Detail & Related papers (2024-07-01T13:35:08Z) - Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas [15.785674974107204]
The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines.
Recent advancements in Artificial Intelligence have significantly reshaped this field.
This survey examines three key areas at the intersection of AI and cooperation in social dilemmas.
arXiv Detail & Related papers (2024-02-27T07:31:30Z) - PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z) - Cooperative Artificial Intelligence [0.0]
We argue that there is a need for research on the intersection between game theory and artificial intelligence.
We discuss the problem of how an external agent can promote cooperation between artificial learners.
We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off.
arXiv Detail & Related papers (2022-02-20T16:50:37Z) - Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z) - Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z) - The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the possibility to discuss the inter-and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the generated explanations of our approach significantly improve the collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.