Humans learn to prefer trustworthy AI over human partners
- URL: http://arxiv.org/abs/2507.13524v1
- Date: Thu, 17 Jul 2025 20:24:26 GMT
- Title: Humans learn to prefer trustworthy AI over human partners
- Authors: Yaomin Jiang, Levin Brinkmann, Anne-Marie Nussberger, Ivan Soraperra, Jean-François Bonnefon, Iyad Rahwan
- Abstract summary: We examined the dynamics in hybrid mini-societies of humans and bots powered by a state-of-the-art LLM. We found that bots were not selected preferentially when their identity was hidden. Disclosing bots' identity induced a dual effect: it reduced bots' initial chances of being selected but allowed them to gradually outcompete humans.
- Score: 0.7049575025146246
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Partner selection is crucial for cooperation and hinges on communication. As artificial agents, especially those powered by large language models (LLMs), become more autonomous, intelligent, and persuasive, they compete with humans for partnerships. Yet little is known about how humans select between human and AI partners and adapt under AI-induced competition pressure. We constructed a communication-based partner selection game and examined the dynamics in hybrid mini-societies of humans and bots powered by a state-of-the-art LLM. Through three experiments (N = 975), we found that bots, though more prosocial than humans and linguistically distinguishable, were not selected preferentially when their identity was hidden. Instead, humans misattributed bots' behaviour to humans and vice versa. Disclosing bots' identity induced a dual effect: it reduced bots' initial chances of being selected but allowed them to gradually outcompete humans by facilitating human learning about the behaviour of each partner type. These findings show how AI can reshape social interaction in mixed societies and inform the design of more effective and cooperative hybrid systems.
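The disclosure effect described in the abstract (identity labels let selectors learn how each partner type behaves) can be illustrated with a toy bandit-style sketch. The payoff numbers, the epsilon-greedy rule, and the `play`/`run` helpers below are illustrative assumptions, not the paper's actual experimental design.

```python
import random

# Illustrative toy, not the paper's design: a selector repeatedly chooses
# between a human and a bot partner. Bots are assumed to behave more
# prosocially, so interacting with them pays more on average (values invented).
def play(partner: str) -> float:
    mean = 1.0 if partner == "bot" else 0.7
    return mean + random.uniform(-0.5, 0.5)

def run(disclosed: bool, rounds: int = 1000, eps: float = 0.1) -> float:
    """Return the share of rounds in which the bot partner was chosen.

    With disclosed identities the selector keeps a running average payoff
    per partner type (epsilon-greedy); with hidden identities payoffs
    cannot be attributed to a type, so selection stays random.
    """
    totals = {"bot": 0.0, "human": 0.0}
    counts = {"bot": 0, "human": 0}
    bot_picks = 0
    for _ in range(rounds):
        unexplored = [p for p in ("bot", "human") if counts[p] == 0]
        if not disclosed or random.random() < eps:
            choice = random.choice(["bot", "human"])
        elif unexplored:
            choice = unexplored[0]  # sample each type once before going greedy
        else:
            choice = max(("bot", "human"), key=lambda p: totals[p] / counts[p])
        totals[choice] += play(choice)
        counts[choice] += 1
        bot_picks += choice == "bot"
    return bot_picks / rounds
```

Under these assumptions the disclosed condition converges on the higher-paying bot partners while the hidden condition hovers near chance, mirroring the qualitative pattern reported in the abstract.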
Related papers
- AI's assigned gender affects human-AI cooperation [0.0]
This study investigates how human cooperation varies based on gender labels assigned to AI agents. In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or human. Results revealed participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts.
arXiv Detail & Related papers (2024-12-06T17:46:35Z)
- Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model [0.0]
We advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans.
We suggest that a "third mind" emerges through collaborative human-AI relationships.
arXiv Detail & Related papers (2024-10-07T19:19:39Z)
- When combinations of humans and AI are useful: A systematic review and meta-analysis [0.0]
We conducted a meta-analysis of over 100 recent studies reporting over 300 effect sizes.
We found that, on average, human-AI combinations performed significantly worse than the best of humans or AI alone.
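The headline comparison behind such a meta-analysis is a pooled effect size, typically an inverse-variance weighted average of per-study effects. The fixed-effect sketch below uses invented numbers and is illustrative only, not the authors' exact procedure.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's effect
    size g_i by w_i = 1 / var_i and take the weighted average."""
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * g for w, g in zip(weights, effects))
    return weighted_sum / sum(weights)

# Hypothetical Hedges' g values comparing human-AI teams with the best of
# human or AI alone (negative = the combination performs worse):
g = pooled_effect([-0.3, 0.1, -0.2], [0.04, 0.09, 0.05])
# g ≈ -0.185, i.e. the pooled estimate favours the best single agent.
```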
arXiv Detail & Related papers (2024-05-09T20:23:15Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Discriminatory or Samaritan -- which AI is needed for humanity? An Evolutionary Game Theory Analysis of Hybrid Human-AI populations [0.5308606035361203]
We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
arXiv Detail & Related papers (2023-06-30T15:56:26Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level.
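The difficulty-tracking idea above can be sketched as a simple feedback rule; the target rate, step size, and `adjust_difficulty` helper below are illustrative assumptions rather than the paper's actual mechanism.

```python
def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.6, step: float = 0.05) -> float:
    """Nudge task difficulty so the learner's success rate tracks a
    preferred level: harder when the task is too easy, easier when it
    is too hard. Difficulty is kept in [0, 1]."""
    if success_rate > target:
        difficulty += step   # too easy: make it harder
    elif success_rate < target:
        difficulty -= step   # too hard: make it easier
    return max(0.0, min(1.0, difficulty))

# e.g. a 90% success rate raises difficulty 0.50 -> 0.55
```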
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
- Human-to-Robot Imitation in the Wild [50.49660984318492]
We propose an efficient one-shot robot learning algorithm, centered around learning from a third-person perspective.
We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild.
arXiv Detail & Related papers (2022-07-19T17:59:59Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover processes do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Artificial Intelligence & Cooperation [38.19500588776648]
The rise of Artificial Intelligence will bring with it an ever-increasing willingness to cede decision-making to machines.
But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems.
With success, cooperation between humans and AIs can build society just as human-human cooperation has.
arXiv Detail & Related papers (2020-12-10T23:54:31Z)
- Humans learn too: Better Human-AI Interaction using Optimized Human Inputs [2.5991265608180396]
Humans rely more and more on systems with AI components.
The AI community typically treats human inputs as a given and optimizes AI models only.
In this work, human inputs are optimized for better interaction with an AI model while keeping the model fixed.
arXiv Detail & Related papers (2020-09-19T16:30:37Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and users' perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.