Artificial Intelligence & Cooperation
- URL: http://arxiv.org/abs/2012.06034v1
- Date: Thu, 10 Dec 2020 23:54:31 GMT
- Title: Artificial Intelligence & Cooperation
- Authors: Elisa Bertino, Finale Doshi-Velez, Maria Gini, Daniel Lopresti, and
David Parkes
- Abstract summary: The rise of Artificial Intelligence will bring with it an ever-increasing willingness to cede decision-making to machines.
But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems.
With success, cooperation between humans and AIs can build society just as human-human cooperation has.
- Score: 38.19500588776648
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of Artificial Intelligence (AI) will bring with it an
ever-increasing willingness to cede decision-making to machines. But rather
than just giving machines the power to make decisions that affect us, we need
ways to work cooperatively with AI systems. There is a vital need for research
in "AI and Cooperation" that seeks to understand the ways in which systems of
AIs and systems of AIs with people can engender cooperative behavior. Trust in
AI is also key: trust that is intrinsic and trust that can only be earned over
time. Here we use the term "AI" in its broadest sense, as employed by the
recent 20-Year Community Roadmap for AI Research (Gil and Selman, 2019),
including but certainly not limited to, recent advances in deep learning.
With success, cooperation between humans and AIs can build society just as
human-human cooperation has. Whether coming from an intrinsic willingness to be
helpful, or driven through self-interest, human societies have grown strong and
the human species has found success through cooperation. We cooperate "in the
small" -- as family units, with neighbors, with co-workers, with strangers --
and "in the large" as a global community that seeks cooperative outcomes around
questions of commerce, climate change, and disarmament. Cooperation has evolved
in nature also, in cells and among animals. While many cases of cooperation
between humans and AIs will be asymmetric, with the human ultimately in
control, AI systems are growing so complex that, even today, it is impossible
for humans, acting simply as passive observers, to fully comprehend their
reasoning, recommendations, and actions.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- On the Utility of Accounting for Human Beliefs about AI Behavior in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that accounts for how humans reason about the behavior of their AI partners.
We then develop an AI agent that considers both human behavior and human beliefs in devising its strategy for working with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z)
- Human-AI Collaboration in Real-World Complex Environment with Reinforcement Learning [8.465957423148657]
We show that learning from humans is effective and that human-AI collaboration outperforms human-controlled and fully autonomous AI agents.
We develop a user interface to allow humans to assist AI agents effectively.
arXiv Detail & Related papers (2023-12-23T04:27:24Z)
- Discriminatory or Samaritan -- which AI is needed for humanity? An Evolutionary Game Theory Analysis of Hybrid Human-AI populations [0.5308606035361203]
We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We find that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
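The one-shot Prisoner's Dilemma setting studied in this paper can be sketched as follows. This is a hypothetical illustration using standard payoff values and two toy AI policies, not the paper's actual evolutionary model:

```python
# Minimal sketch of a one-shot Prisoner's Dilemma with two hypothetical
# AI policies: a "Samaritan" that always cooperates and a "Discriminatory"
# AI that cooperates only with cooperators. Payoffs follow the standard
# ordering T > R > P > S (temptation, reward, punishment, sucker).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff (S) vs. temptation (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}

def samaritan_ai(partner_action: str) -> str:
    """Helps everyone unconditionally, including defectors."""
    return "C"

def discriminatory_ai(partner_action: str) -> str:
    """Cooperates only with cooperators (assumes the partner's action is observable)."""
    return "C" if partner_action == "C" else "D"

def play(human_action: str, ai_policy) -> tuple:
    """Return (human payoff, AI payoff) for one round."""
    ai_action = ai_policy(human_action)
    return PAYOFF[(human_action, ai_action)]

# A defecting human can exploit the Samaritan but not the Discriminatory AI.
print(play("D", samaritan_ai))       # (5, 0)
print(play("D", discriminatory_ai))  # (1, 1)
```

In an evolutionary analysis like the paper's, the interesting question is which policy shifts the human population toward cooperation over repeated generations, despite the Samaritan's short-term exploitability.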
arXiv Detail & Related papers (2023-06-30T15:56:26Z)
- Roots and Requirements for Collaborative AIs [0.0]
The dream of AI as a collaborator is different from computer tools that augment human intelligence (IA) or that mediate human collaboration.
Government advisory groups and leaders in AI have advocated for years that AIs should be transparent and effective collaborators.
Are AI teammates part of a solution? How artificially intelligent could and should they be?
arXiv Detail & Related papers (2023-03-21T17:27:38Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry an opportunity to discuss the inter- and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.