Human Misperception of Generative-AI Alignment: A Laboratory Experiment
- URL: http://arxiv.org/abs/2502.14708v1
- Date: Thu, 20 Feb 2025 16:32:42 GMT
- Title: Human Misperception of Generative-AI Alignment: A Laboratory Experiment
- Authors: Kevin He, Ran Shorrer, Mengjia Xia
- Abstract summary: We study people's perception of generative artificial intelligence (GenAI) alignment in the context of economic decision-making. We find that people overestimate the degree of alignment between GenAI's choices and human choices.
- Score: 0.393259574660092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We conduct an incentivized laboratory experiment to study people's perception of generative artificial intelligence (GenAI) alignment in the context of economic decision-making. Using a panel of economic problems spanning the domains of risk, time preference, social preference, and strategic interactions, we ask human subjects to make choices for themselves and to predict the choices made by GenAI on behalf of a human user. We find that people overestimate the degree of alignment between GenAI's choices and human choices. In every problem, human subjects' average prediction about GenAI's choice is substantially closer to the average human-subject choice than it is to the GenAI choice. At the individual level, different subjects' predictions about GenAI's choice in a given problem are highly correlated with their own choices in the same problem. We explore the implications of people overestimating GenAI alignment in a simple theoretical model.
Related papers
- Generative to Agentic AI: Survey, Conceptualization, and Challenges [1.8592384822257952]
Agentic Artificial Intelligence (AI) builds upon Generative AI (GenAI) and constitutes the next major step in the evolution of AI, with much stronger reasoning and interaction capabilities.
The distinction between Agentic AI and GenAI, however, remains poorly understood.
arXiv Detail & Related papers (2025-04-26T09:47:00Z)
- Human Trust in AI Search: A Large-Scale Experiment [0.07589017023705934]
Generative artificial intelligence (GenAI) can influence what we buy, how we vote, and our health.
No work establishes the causal effect of generative search designs on human trust.
We execute 12,000 search queries across seven countries, generating 80,000 real-time GenAI and traditional search results.
arXiv Detail & Related papers (2025-04-08T21:12:41Z)
- Selective Response Strategies for GenAI [6.261444979025644]
The rise of Generative AI (GenAI) has significantly impacted human-based forums like Stack Overflow. The resulting decline in human-generated content creates a negative feedback loop, hindering the development of GenAI systems. We show that selective response can have a compounding effect on the data generation process.
arXiv Detail & Related papers (2025-02-02T09:27:02Z)
- A theory of appropriateness with applications to generative artificial intelligence [56.23261221948216]
We need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it.
This paper presents a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.
arXiv Detail & Related papers (2024-12-26T00:54:03Z)
- "So what if I used GenAI?" -- Implications of Using Cloud-based GenAI in Software Engineering Research [0.0]
This paper sheds light on the various research aspects in which GenAI is used, raising awareness of its legal implications for novice and budding researchers. We summarize key aspects of our current knowledge that every software researcher using GenAI should be aware of in order to avoid critical mistakes that may expose them to liability claims.
arXiv Detail & Related papers (2024-12-10T06:18:15Z)
- Early Adoption of Generative Artificial Intelligence in Computing Education: Emergent Student Use Cases and Perspectives in 2023 [38.83649319653387]
There is limited prior research on computing students' use and perceptions of GenAI.
We surveyed all computer science majors at a small engineering-focused R1 university.
We discuss the impact of our findings on the emerging conversation around GenAI and education.
arXiv Detail & Related papers (2024-11-17T20:17:47Z)
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation. By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
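The ratio described in that summary can be sketched with toy discrete data: the proportional human contribution is the mutual information I(H;O) between human input H and AI-assisted output O, divided by the entropy H(O) of the output. A minimal sketch using plug-in entropy estimates from paired samples; the function names and toy symbols are illustrative and not taken from the paper:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits, estimated from a list of symbols."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def human_contribution(human_input, ai_output):
    """Proportional human contribution: I(H;O) / H(O)."""
    h_o = entropy(ai_output)
    return mutual_information(human_input, ai_output) / h_o if h_o > 0 else 0.0

# Toy case: the output is fully determined by the human input,
# so the human contribution ratio is 1.0.
prompts = ["a", "b", "a", "b"]
outputs = ["x", "y", "x", "y"]
print(human_contribution(prompts, outputs))  # prints 1.0
```

When the human input carries no information about the output (e.g. a constant prompt with varying outputs), the ratio drops to 0.0; real estimators would need continuous or high-dimensional variants of these quantities.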
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- Model-based Maintenance and Evolution with GenAI: A Look into the Future [47.93555901495955]
We argue that Generative Artificial Intelligence (GenAI) can be used as a means to address the limitations of Model-Based Maintenance and Evolution (MBM&E).
We propose that GenAI can be used in MBM&E to reduce engineers' learning curve, maximize efficiency with recommendations, and serve as a reasoning tool for understanding domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z)
- Teacher agency in the age of generative AI: towards a framework of hybrid intelligence for learning design [0.0]
Generative AI (GenAI) is being used in education for different purposes.
From the teachers' perspective, GenAI can support activities such as learning design.
However, GenAI also has the potential to negatively affect professional agency due to teachers' limited power.
arXiv Detail & Related papers (2024-07-09T08:28:05Z)
- AutoML in The Wild: Obstacles, Workarounds, and Expectations [37.813441975457735]
This study focuses on understanding the limitations of AutoML encountered by users in their real-world practices.
Our findings reveal that users actively exercise user agency to overcome three major challenges arising from customizability, transparency, and privacy.
arXiv Detail & Related papers (2023-02-21T17:06:46Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.