On the Perception of Difficulty: Differences between Humans and AI
- URL: http://arxiv.org/abs/2304.09803v1
- Date: Wed, 19 Apr 2023 16:42:54 GMT
- Title: On the Perception of Difficulty: Differences between Humans and AI
- Authors: Philipp Spitzer, Joshua Holstein, Michael Vössing, Niklas Kühl
- Abstract summary: A key challenge in the interaction of humans with AI is estimating the difficulty of single task instances for human and AI agents.
Research in the field of human-AI interaction estimates the perceived difficulty of humans and AI independently of each other.
Research to date has not yet adequately examined the differences in the perceived difficulty of humans and AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increased adoption of artificial intelligence (AI) in industry and
society, effective human-AI interaction systems are becoming increasingly
important. A central challenge in the interaction of humans with AI is the
estimation of difficulty for human and AI agents for single task instances.
These estimates are crucial to evaluate each agent's capabilities and are thus
required to facilitate effective collaboration. So far, research in the field
of human-AI interaction has estimated the perceived difficulty of humans and AI
independently of each other. However, the effective interaction of human and AI
agents depends on metrics that accurately reflect each agent's perceived
difficulty in achieving valuable outcomes. Research to date has not yet
adequately examined these differences in perceived difficulty. This work
therefore reviews recent research on perceived difficulty in human-AI
interaction and on the factors that must be held constant to compare both
agents' perceived difficulty consistently, e.g., by creating the same
prerequisites for both. Furthermore, we present an experimental design to
thoroughly examine the perceived difficulty of both agents and contribute to a
better understanding of the design of such systems.
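To make per-instance difficulty concrete, the sketch below computes simple difficulty proxies for both agents on the same task instance: prediction entropy for the AI, and a blend of error rate and response time for humans. These proxies, the 10-second scale, and the 50/50 weighting are common illustrative choices, not the measures proposed in the paper.

```python
import numpy as np

def ai_difficulty(class_probs: np.ndarray) -> float:
    """Proxy: entropy of the model's predictive distribution for one
    instance. Higher entropy means the instance is 'harder' for the AI."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def human_difficulty(correct: np.ndarray, rt_seconds: np.ndarray) -> float:
    """Proxy: blend of error rate and squashed mean response time across
    the humans who solved the same instance. The 10-second scale and the
    equal weighting are arbitrary illustrative choices."""
    error_rate = 1.0 - correct.mean()
    rt_norm = rt_seconds.mean() / (rt_seconds.mean() + 10.0)
    return float(0.5 * error_rate + 0.5 * rt_norm)

# The same task instance, assessed for both agents:
print(ai_difficulty(np.array([0.4, 0.35, 0.25])))  # near-uniform -> hard for the AI
print(human_difficulty(np.array([1, 0, 1]), np.array([4.2, 9.1, 5.5])))
```

Keeping the prerequisites equal, as the paper argues, would mean both proxies are computed on exactly the same instances under comparable conditions before the two difficulty estimates are compared.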
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
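As a rough illustration of counterfactual responsibility attribution with an SCM, the toy model below encodes a single human-AI decision as structural equations and asks whether the outcome would have differed had each agent acted otherwise. The variables and equations are invented for illustration and are not the authors' actual framework.

```python
# A toy structural causal model (SCM) for one human-AI decision.
# All names and structural equations here are illustrative assumptions.

def outcome(ai_rec_harmful: bool, human_follows: bool) -> bool:
    """Structural equation: harm occurs iff the action actually taken is
    the harmful one. The human either follows the AI's recommendation or
    does the opposite."""
    action_harmful = ai_rec_harmful if human_follows else not ai_rec_harmful
    return action_harmful

# Factual world: the AI recommends a harmful action and the human follows it.
factual = outcome(ai_rec_harmful=True, human_follows=True)

# Counterfactual interventions: flip one variable, hold the other fixed.
cf_ai = outcome(ai_rec_harmful=False, human_follows=True)     # better AI advice
cf_human = outcome(ai_rec_harmful=True, human_follows=False)  # human overrides

print(f"harm occurred:            {factual}")
print(f"AI is a but-for cause:    {factual and not cf_ai}")
print(f"human is a but-for cause: {factual and not cf_human}")
```

In this toy case both counterfactuals avert the harm, so naive but-for analysis blames both agents; systematically handling such shared-cause situations is what an SCM-based attribution framework is for.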
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - The Model Mastery Lifecycle: A Framework for Designing Human-AI Interaction [0.0]
The utilization of AI in an increasing number of fields is the latest iteration of a long process.
There is an urgent need for methods to determine how AI should be used in different situations.
arXiv Detail & Related papers (2024-08-23T01:00:32Z) - Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAI-Co2, a novel human-AI co-construction framework.
We formalize HAI-Co2 and discuss the difficult open research problems that it faces.
We present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
arXiv Detail & Related papers (2024-08-14T11:06:57Z) - When combinations of humans and AI are useful: A systematic review and meta-analysis [0.0]
We conducted a meta-analysis of over 100 recent studies reporting over 300 effect sizes.
We found that, on average, human-AI combinations performed significantly worse than the best of humans or AI alone.
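The comparison underlying such a meta-analysis can be sketched as a standardized mean difference between the human-AI team and the stronger solo baseline. The sketch below computes Hedges' g; the statistic is standard, but all accuracy figures are made up for illustration.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # correction factor
    return d * j

team = (0.78, 0.10, 40)         # mean accuracy, sd, n (illustrative)
human_alone = (0.72, 0.12, 40)
ai_alone = (0.81, 0.08, 40)

best_solo = max(human_alone, ai_alone, key=lambda x: x[0])
g = hedges_g(*team, *best_solo)
print(f"g = {g:.2f}")  # negative: the team underperforms the best solo agent
```

A meta-analysis then pools many such per-study effect sizes, typically with a random-effects model, to reach the average finding reported above.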
arXiv Detail & Related papers (2024-05-09T20:23:15Z) - Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z) - The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z) - Discriminatory or Samaritan -- which AI is needed for humanity? An
Evolutionary Game Theory Analysis of Hybrid Human-AI populations [0.5308606035361203]
We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
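A minimal sketch of the game setup follows: the payoffs a human receives in a one-shot Prisoner's Dilemma against each AI type. The payoff values use the standard T > R > P > S convention and are not necessarily those of the paper; the evolutionary population dynamics themselves are not reproduced here.

```python
# One-shot Prisoner's Dilemma payoffs for the human player.
# Values follow the standard convention; illustrative, not the paper's.
R, S, T, P = 3, 0, 5, 1  # reward, sucker's payoff, temptation, punishment

def human_payoff(human_cooperates: bool, ai_cooperates: bool) -> int:
    if human_cooperates:
        return R if ai_cooperates else S
    return T if ai_cooperates else P

# Samaritan AI cooperates with everyone, even defectors;
# Discriminatory AI cooperates only with cooperators.
samaritan = lambda human_coop: True
discriminatory = lambda human_coop: bool(human_coop)

for name, ai in [("Samaritan", samaritan), ("Discriminatory", discriminatory)]:
    for hc in (True, False):
        print(f"{name}: human plays {'C' if hc else 'D'}, "
              f"earns {human_payoff(hc, ai(hc))}")
```

Note that against a Samaritan AI a lone defector still earns the temptation payoff; the paper's finding that Samaritan AIs nonetheless promote cooperation emerges from the evolutionary dynamics of the whole population, not from these one-shot payoffs alone.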
arXiv Detail & Related papers (2023-06-30T15:56:26Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Watch-And-Help: A Challenge for Social Perception and Human-AI
Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
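The kind of confidence display studied in the last entry above can be sketched as a simple deferral rule: show the model's confidence per case and hand the decision to the human when confidence falls below a threshold. The softmax model and the 0.8 threshold are illustrative assumptions, not the paper's experimental interface.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def assist(logits: np.ndarray, threshold: float = 0.8) -> str:
    """Show the AI's suggestion with its confidence, deferring to the
    human on low-confidence cases. Threshold is an illustrative choice."""
    probs = softmax(logits)
    pred, conf = int(probs.argmax()), float(probs.max())
    if conf >= threshold:
        return f"AI suggests class {pred} (confidence {conf:.0%})"
    return f"Low confidence ({conf:.0%}): please decide yourself"

print(assist(np.array([2.5, 0.1, -1.0])))  # confident case
print(assist(np.array([0.4, 0.3, 0.2])))   # uncertain case -> defer
```

As the paper's findings suggest, such a display can help calibrate trust, but calibration alone does not guarantee better joint human-AI decisions.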
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.