AI's assigned gender affects human-AI cooperation
- URL: http://arxiv.org/abs/2412.05214v1
- Date: Fri, 06 Dec 2024 17:46:35 GMT
- Title: AI's assigned gender affects human-AI cooperation
- Authors: Sepideh Bazazi, Jurgis Karpus, Taha Yasseri
- Abstract summary: This study investigates how human cooperation varies based on gender labels assigned to AI agents.
In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or humans.
Results revealed that participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts.
- Score: 0.0
- License:
- Abstract: Cooperation between humans and machines is increasingly vital as artificial intelligence (AI) becomes more integrated into daily life. Research indicates that people are often less willing to cooperate with AI agents than with humans, more readily exploiting AI for personal gain. While prior studies have shown that giving AI agents human-like features influences people's cooperation with them, the impact of AI's assigned gender remains underexplored. This study investigates how human cooperation varies based on gender labels assigned to AI agents with which they interact. In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or humans. The partners were also labelled male, female, non-binary, or gender-neutral. Results revealed that participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts, reflecting gender biases similar to those in human-human interactions. These findings highlight the significance of gender biases in human-AI interactions that must be considered in future policy, design of interactive AI systems, and regulation of their use.
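The one-shot Prisoner's Dilemma underlying the experiment can be sketched with a generic payoff matrix satisfying the usual ordering T > R > P > S; the numeric stakes below are illustrative assumptions, not the values used in the study.

```python
# Illustrative one-shot Prisoner's Dilemma with generic payoffs.
# "C" = cooperate, "D" = defect; values are assumed, not the study's.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),  # exploiting a cooperator yields T
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def play(action_a: str, action_b: str) -> tuple[int, int]:
    """Return (payoff_a, payoff_b) for a single round."""
    return PAYOFFS[(action_a, action_b)]

# Exploiting a cooperating partner maximises one's own payoff:
print(play("D", "C"))  # (5, 0)
```

In this structure, "exploiting" a partner means defecting against an expected cooperator, which is the behaviour the study measures against differently labelled partners.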
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation.
By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
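The quantity described above, mutual information between human input X and AI-assisted output Y normalised by the self-information (entropy) of Y, can be sketched with a toy plug-in estimator over discrete symbol pairs. The pairing of tokens below is an illustrative assumption, not the paper's actual pipeline.

```python
import math
from collections import Counter

def human_contribution(pairs):
    """Proportional human contribution: I(X;Y) / H(Y), estimated
    from observed (human_input, ai_output) symbol pairs via
    empirical frequencies. A toy plug-in estimator for illustration."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal over human input
    py = Counter(y for _, y in pairs)    # marginal over AI output
    mi = sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
             for (x, y), c in pxy.items())
    h_y = -sum((c / n) * math.log2(c / n) for c in py.values())
    return mi / h_y if h_y > 0 else 0.0

# Output fully determined by human input -> contribution 1.0;
# output independent of human input -> contribution 0.0.
print(human_contribution([("a", "A"), ("b", "B")] * 2))          # 1.0
print(human_contribution([("a", "A"), ("a", "B"),
                          ("b", "A"), ("b", "B")]))              # 0.0
```

The ratio lies in [0, 1]: it is 1 when the AI output carries no information beyond the human input, and 0 when the output is independent of it.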
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that captures how humans interpret and reason about their AI partner's intentions.
We create an AI agent that incorporates both human behavior and human beliefs when devising its strategy for interacting with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- Discriminatory or Samaritan -- which AI is needed for humanity? An Evolutionary Game Theory Analysis of Hybrid Human-AI populations [0.5308606035361203]
We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
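The paper models hybrid human-AI populations; as context, a minimal replicator-dynamics sketch of the baseline one-shot Prisoner's Dilemma (no AI intervention, assumed payoffs R=3, S=0, T=5, P=1) shows why cooperation collapses without mechanisms such as the Samaritan AIs described above.

```python
# Replicator dynamics for the baseline one-shot PD in a well-mixed
# population. Payoff values are assumed; the paper's hybrid
# human-AI model is richer than this sketch.
def step(x, R=3, S=0, T=5, P=1, dt=0.01):
    """x: fraction of cooperators; returns x after one Euler step."""
    f_c = x * R + (1 - x) * S        # cooperator fitness
    f_d = x * T + (1 - x) * P        # defector fitness
    avg = x * f_c + (1 - x) * f_d    # population mean fitness
    return x + dt * x * (f_c - avg)  # replicator equation

x = 0.5
for _ in range(2000):
    x = step(x)
# Cooperation decays toward zero: defection strictly dominates.
```

Because T > R and P > S, defectors always out-earn cooperators, so the cooperator share shrinks monotonically; interventions that change the effective payoffs are needed to sustain cooperation.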
arXiv Detail & Related papers (2023-06-30T15:56:26Z)
- On the Perception of Difficulty: Differences between Humans and AI [0.0]
A key challenge in human-AI interaction is estimating the difficulty of a single task instance for human and AI agents.
Research in the field has so far estimated the perceived difficulty of humans and AI independently of each other, and has not yet adequately examined the differences between the two.
arXiv Detail & Related papers (2023-04-19T16:42:54Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Artificial Intelligence & Cooperation [38.19500588776648]
The rise of Artificial Intelligence will bring with it an ever-increasing willingness to cede decision-making to machines.
But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems.
If successful, cooperation between humans and AIs can build society just as human-human cooperation has.
arXiv Detail & Related papers (2020-12-10T23:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.