Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty
- URL: http://arxiv.org/abs/2303.07462v2
- Date: Fri, 14 Apr 2023 17:53:04 GMT
- Title: Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty
- Authors: Minkyu Shin, Jin Kim, Bas van Opheusden, and Thomas L. Griffiths
- Abstract summary: We analyze more than 5.8 million move decisions made by professional Go players over the past 71 years.
We find that human players began to make significantly better decisions following the advent of superhuman AI.
- Score: 8.120494737877799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How will superhuman artificial intelligence (AI) affect human decision
making? And what will be the mechanisms behind this effect? We address these
questions in a domain where AI already exceeds human performance, analyzing
more than 5.8 million move decisions made by professional Go players over the
past 71 years (1950-2021). To address the first question, we use a superhuman
AI program to estimate the quality of human decisions across time, generating
58 billion counterfactual game patterns and comparing the win rates of actual
human decisions with those of counterfactual AI decisions. We find that humans
began to make significantly better decisions following the advent of superhuman
AI. We then examine human players' strategies across time and find that novel
decisions (i.e., previously unobserved moves) occurred more frequently and
became associated with higher decision quality after the advent of superhuman
AI. Our findings suggest that the development of superhuman AI programs may
have prompted human players to break away from traditional strategies and
induced them to explore novel moves, which in turn may have improved their
decision-making.
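A minimal sketch of the two measurements described above; `engine.win_rate` and `position.legal_moves` are hypothetical stand-ins for a superhuman engine interface, not the authors' actual pipeline:

```python
def decision_quality(engine, position, human_move):
    """Win-rate loss of the human move relative to the engine's best move.

    0.0 means the human matched the engine's preferred move; more negative
    values mean more win probability was given up.
    """
    best = max(engine.win_rate(position, m) for m in position.legal_moves())
    return engine.win_rate(position, human_move) - best


def is_novel(move_sequence, historical_openings):
    """A decision counts as 'novel' if the opening sequence ending with it
    never appears in the historical game database (a set of move tuples)."""
    return tuple(move_sequence) not in historical_openings
```

Aggregating `decision_quality` by year and intersecting it with `is_novel` mirrors the shape of the paper's analysis: both decision quality and the frequency of novel moves rise after superhuman AI appears.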
Related papers
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
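As a toy illustration of such behavior models (an assumption for this example, not the paper's model), reliance on an AI recommendation is often modeled as a logistic function of the confidence gap:

```python
import math

def p_accept_ai(ai_confidence, human_confidence, bias=0.0, temperature=1.0):
    """Toy logistic reliance model: the probability that a decision maker
    adopts the AI recommendation grows with the gap between the AI's
    stated confidence and their own confidence in a different answer."""
    gap = ai_confidence - human_confidence + bias
    return 1.0 / (1.0 + math.exp(-gap / temperature))
```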
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice.
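A loose sketch of that idea, with all interfaces assumed rather than taken from the paper:

```python
def contrastive_explanation(features, ai_choice, predict_human_choice, score):
    """Contrast the AI's choice with the choice a human model predicts the
    user will make, surfacing the feature that most separates the two."""
    human_choice = predict_human_choice(features)
    if human_choice == ai_choice:
        return f"The AI agrees with your likely choice: {ai_choice!r}."
    # Feature whose contribution most favors the AI's choice over the
    # predicted human choice, per some per-feature scoring function.
    pivot = max(features, key=lambda f: score(f, ai_choice) - score(f, human_choice))
    return (f"Unlike {human_choice!r}, which you might lean toward, the AI "
            f"chose {ai_choice!r} mainly because of {pivot!r}.")
```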
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
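A skeleton of such a deliberation loop might look as follows (function names are assumptions, not the framework's API):

```python
def deliberate(dimensions, human_opinion, ai_opinion, discuss, human_update):
    """Elicit opinions per decision dimension; only conflicting dimensions
    trigger a deliberative exchange, and the human makes the final update."""
    decision = {}
    for dim in dimensions:
        h, a = human_opinion(dim), ai_opinion(dim)
        decision[dim] = h if h == a else human_update(dim, discuss(dim, h, a))
    return decision
```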
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
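A hierarchical controller of the kind alluded to can be sketched as follows (names are assumptions, not the benchmark's API):

```python
def hierarchical_step(high_level_policy, low_level_policies, observation):
    """A high-level policy picks a skill and a target; a pre-trained
    low-level policy turns that into whole-body motor commands."""
    skill, target = high_level_policy(observation)      # e.g. ("reach", xyz)
    return low_level_policies[skill](observation, target)
```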
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Close the Gates: How we can keep the future human by choosing not to develop superhuman general-purpose artificial intelligence [0.20919309330073077]
In the coming years, humanity may irreversibly cross a threshold by creating general-purpose AI.
This would upend core aspects of human society, present many unprecedented risks, and is likely to be uncontrollable in several senses.
We can choose to not do so, starting by instituting hard limits on the computation that can be used to train and run neural networks.
With these limits in place, AI research and industry can focus on making both narrow and general-purpose AI that humans can understand and control, and from which we can reap enormous benefit.
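As a back-of-the-envelope illustration of how such a compute limit could be checked, a common rule of thumb estimates dense-transformer training cost at roughly 6 FLOPs per parameter per token; the cap value below is an assumption for the example, not a figure from the paper:

```python
def training_flops(n_params, n_tokens):
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token
    (forward plus backward pass) for dense transformers."""
    return 6 * n_params * n_tokens

CAP_FLOPS = 1e25  # hypothetical cap, assumed for illustration

def under_cap(n_params, n_tokens, cap=CAP_FLOPS):
    return training_flops(n_params, n_tokens) <= cap

# e.g. 70e9 parameters x 15e12 tokens -> 6.3e24 FLOPs, under this cap
```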
arXiv Detail & Related papers (2023-11-15T23:41:12Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
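A loose sketch of the idea, with every callable assumed for illustration:

```python
def evolve_and_train(policy, train, evaluate, neighbors, steps):
    """Micro-evolutionary transfer: morphology parameter 0.0 is the human
    demonstrator, 1.0 the target robot. Each step moves to the nearby
    morphology the current policy handles best, then keeps training."""
    morphology = 0.0
    for _ in range(steps):
        morphology = max(neighbors(morphology), key=lambda m: evaluate(policy, m))
        policy = train(policy, morphology)
    return policy, morphology
```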
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs [22.52332536886295]
We present a novel formulation of the interaction between the human and the AI as a sequential game.
We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP.
We discuss ways in which the machine can learn to improve upon its own limitations as well with the help of the human.
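The Bayes-adaptive core reduces to maintaining a posterior over latent human parameters; a generic sketch, with interfaces assumed:

```python
def update_belief(belief, likelihood, observed_action, state):
    """The machine keeps a belief over latent human parameters theta
    (e.g. a bounded-rationality level) and updates it by Bayes' rule
    after each observed human action."""
    posterior = {theta: p * likelihood(observed_action, state, theta)
                 for theta, p in belief.items()}
    total = sum(posterior.values())
    return {theta: p / total for theta, p in posterior.items()}
```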
arXiv Detail & Related papers (2022-04-03T21:00:51Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Aligning Superhuman AI with Human Behavior: Chess as a Model System [5.236087378443016]
We develop Maia, a customized version of AlphaZero trained on human chess games, which predicts human moves with much higher accuracy than existing engines.
For a dual task of predicting whether a human will make a large mistake on the next move, we develop a deep neural network that significantly outperforms competitive baselines.
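A toy two-headed model in the same spirit (this architecture is an illustrative assumption, not Maia's):

```python
import torch
import torch.nn as nn

class HumanPredictor(nn.Module):
    """One head predicts the human's next move; the other predicts the
    probability that the next move is a large mistake."""
    def __init__(self, n_features, n_moves):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU())
        self.move_head = nn.Linear(256, n_moves)  # next-move logits
        self.blunder_head = nn.Linear(256, 1)     # P(large mistake)

    def forward(self, x):
        h = self.trunk(x)
        return self.move_head(h), torch.sigmoid(self.blunder_head(h))
```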
arXiv Detail & Related papers (2020-06-02T18:12:52Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
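A one-line model makes the claim concrete (a toy model of this setting, not the paper's formulation): if the human accepts the AI's answer with probability p and otherwise works alone, team accuracy is a mixture, so a highly accurate but rarely trusted AI can lose to a slightly weaker one that earns more acceptance:

```python
def team_accuracy(p_accept, ai_acc, human_acc):
    """Expected team accuracy when the human accepts the AI's answer with
    probability p_accept and otherwise solves the task alone."""
    return p_accept * ai_acc + (1 - p_accept) * human_acc

# A 95%-accurate AI the human rarely trusts (0.845 team accuracy) loses to
# an 88%-accurate AI accepted far more often (0.872 team accuracy):
assert team_accuracy(0.3, 0.95, 0.80) < team_accuracy(0.9, 0.88, 0.80)
```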
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
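Trust calibration presupposes that the confidence scores themselves are calibrated; a standard generic check (not the paper's analysis) is expected calibration error:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    each bin's accuracy and its mean confidence, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, ok))
    n = len(confidences)
    return sum(len(b) / n * abs(sum(ok for _, ok in b) / len(b)
                                - sum(c for c, _ in b) / len(b))
               for b in bins if b)
```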
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.