Discriminatory or Samaritan -- which AI is needed for humanity? An
Evolutionary Game Theory Analysis of Hybrid Human-AI populations
- URL: http://arxiv.org/abs/2306.17747v2
- Date: Mon, 3 Jul 2023 21:19:28 GMT
- Authors: Tim Booker, Manuel Miranda, Jesús A. Moreno López, José María
Ramos Fernández, Max Reddel, Valeria Widler, Filippo Zimmaro, Alberto
Antonioni, The Anh Han
- Abstract summary: We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence (AI) systems are increasingly embedded in our
lives, their presence leads to interactions that shape our behaviour,
decision-making, and social interactions. Existing theoretical research has
primarily focused on human-to-human interactions, overlooking the unique
dynamics triggered by the presence of AI. In this paper, resorting to methods
from evolutionary game theory, we study how different forms of AI influence the
evolution of cooperation in a human population playing the one-shot Prisoner's
Dilemma game in both well-mixed and structured populations. We found that
Samaritan AI agents that help everyone unconditionally, including defectors,
can promote higher levels of cooperation in humans than Discriminatory AIs that
only help those considered worthy/cooperative, especially in slow-moving
societies where change is viewed with caution or resistance (small intensities
of selection). Intuitively, in fast-moving societies (high intensities of
selection), Discriminatory AIs promote higher levels of cooperation than
Samaritan AIs.
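The dynamics described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's model: the payoff values, population size, number of AI agents, and selection intensity `beta` are illustrative assumptions, and the Discriminatory AI here judges "worthiness" simply by the opponent's current strategy, a stand-in for whatever assessment rule the paper uses. Strategy updating follows the standard pairwise-comparison (Fermi) rule from evolutionary game theory, in which `beta` is the intensity of selection.

```python
import math
import random

# Illustrative one-shot Prisoner's Dilemma payoffs (not the paper's values):
# R = mutual cooperation, S = sucker, T = temptation, P = mutual defection.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(me, other):
    """Row player's payoff in the one-shot PD."""
    if me == "C":
        return R if other == "C" else S
    return T if other == "C" else P

def ai_action(ai_kind, human_strategy):
    """Samaritan AIs cooperate unconditionally; Discriminatory AIs cooperate
    only with cooperators (here judged by the human's current strategy,
    a simplified stand-in for the paper's notion of worthiness)."""
    if ai_kind == "samaritan":
        return "C"
    return "C" if human_strategy == "C" else "D"

def average_payoff(strategy, pop, ai_kind, n_ai, samples=100):
    """Monte-Carlo estimate of a strategy's payoff against a well-mixed
    pool of human players and AI agents."""
    total = 0.0
    p_ai = n_ai / (len(pop) + n_ai)
    for _ in range(samples):
        if random.random() < p_ai:
            total += payoff(strategy, ai_action(ai_kind, strategy))
        else:
            total += payoff(strategy, random.choice(pop))
    return total / samples

def fermi_step(pop, ai_kind, n_ai, beta):
    """One pairwise-comparison (Fermi) imitation step. beta is the
    intensity of selection: small beta ~ slow-moving society,
    large beta ~ fast-moving society."""
    i, j = random.sample(range(len(pop)), 2)
    fi = average_payoff(pop[i], pop, ai_kind, n_ai)
    fj = average_payoff(pop[j], pop, ai_kind, n_ai)
    if random.random() < 1.0 / (1.0 + math.exp(-beta * (fj - fi))):
        pop[i] = pop[j]  # player i imitates player j

random.seed(1)
pop = ["C" if random.random() < 0.5 else "D" for _ in range(100)]
for _ in range(2000):
    fermi_step(pop, "samaritan", n_ai=20, beta=0.1)  # weak selection
print("final cooperation level:", pop.count("C") / len(pop))
```

Swapping `"samaritan"` for `"discriminatory"` and varying `beta` lets one probe the two regimes the abstract contrasts (weak vs. strong selection), though reproducing the paper's quantitative results would require its actual model and parameters.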
Related papers
- Explainable Human-AI Interaction: A Planning Perspective
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z)
- When Are Combinations of Humans and AI Useful?
We conducted a meta-analysis of over 100 recent studies reporting over 300 effect sizes.
We found that, on average, human-AI combinations performed significantly worse than the best of humans or AI alone.
arXiv Detail & Related papers (2024-05-09T20:23:15Z)
- AI for social science and social science of AI: A Survey
Recent advancements in artificial intelligence have sparked a rethinking of artificial general intelligence possibilities.
The increasing human-like capabilities of AI are also attracting attention in social science research.
arXiv Detail & Related papers (2024-01-22T10:57:09Z)
- Human-AI Coevolution
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- On the Perception of Difficulty: Differences between Humans and AI
A key challenge in human-AI interaction is estimating the difficulty of single task instances for both human and AI agents.
Research in the field of human-AI interaction estimates the perceived difficulty of humans and AI independently from each other.
Research to date has not yet adequately examined the differences in the perceived difficulty of humans and AI.
arXiv Detail & Related papers (2023-04-19T16:42:54Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Natural Selection Favors AIs over Humans
We argue that the most successful AI agents will likely have undesirable traits.
If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future.
To counteract these risks and evolutionary forces, we consider interventions such as carefully designing AI agents' intrinsic motivations.
arXiv Detail & Related papers (2023-03-28T17:59:12Z)
- A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Artificial Intelligence & Cooperation
The rise of Artificial Intelligence will bring with it an ever-increasing willingness to cede decision-making to machines.
But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems.
With success, cooperation between humans and AIs can build society just as human-human cooperation has.
arXiv Detail & Related papers (2020-12-10T23:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.