Biased AI improves human decision-making but reduces trust
- URL: http://arxiv.org/abs/2508.09297v3
- Date: Tue, 19 Aug 2025 22:58:06 GMT
- Title: Biased AI improves human decision-making but reduces trust
- Authors: Shiyang Lai, Junsol Kim, Nadav Kunievsky, Yujin Potter, James Evans,
- Abstract summary: Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making.
- Score: 0.8621608193534839
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to non-biased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underappreciated biased AI and overcredited neutral systems. Exposing participants to two AIs whose biases flanked human perspectives closed the perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and resilient human decision-making.
Related papers
- Bias in the Loop: How Humans Evaluate AI-Generated Suggestions [9.578382668831988]
Human-AI collaboration increasingly drives decision-making across industries, from medical diagnosis to content moderation. We know little about the psychological factors that determine when these collaborations succeed or fail. We conducted a randomized experiment with 2,784 participants to examine how task design and individual characteristics shape human responses to AI-generated suggestions.
arXiv Detail & Related papers (2025-09-10T11:43:29Z)
- AI Debate Aids Assessment of Controversial Claims [86.47978525513236]
We study whether AI debate can guide biased judges toward the truth by having two AI systems debate opposing sides of controversial COVID-19 factuality claims. In our human study, we find that debate, where two AI advisor systems present opposing evidence-based arguments, consistently improves judgment accuracy and confidence calibration. In our AI judge study, we find that AI judges with human-like personas achieve even higher accuracy (78.5%) than human judges (70.1%) and default AI judges without personas (69.8%).
arXiv Detail & Related papers (2025-06-02T19:01:53Z)
- Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z)
- Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Engaging with AI: How Interface Design Shapes Human-AI Collaboration in High-Stakes Decision-Making [8.948482790298645]
We examine how various decision-support mechanisms impact user engagement, trust, and human-AI collaborative task performance. Our findings reveal that mechanisms like AI confidence levels, text explanations, and performance visualizations enhanced human-AI collaborative task performance.
arXiv Detail & Related papers (2025-01-28T02:03:00Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in large language models (LLMs) on political opinions and decision-making. We found that participants exposed to partisan biased models were significantly more likely to adopt opinions and make decisions which matched the LLM's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity [4.8342038441006805]
We investigate how AI use affects three interrelated long-term outcomes: learning, the skill gap, and the diversity of decision strategies.
We show that individuals are far more likely to seek AI feedback in situations in which they experienced success rather than failure.
As a result, access to AI feedback increases, rather than decreases, the skill gap between high- and low-skilled individuals.
arXiv Detail & Related papers (2024-09-27T11:44:03Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole. We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making. Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making [45.20615051119694]
We explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making.
We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI.
If decision-makers have the control to decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI.
arXiv Detail & Related papers (2024-01-13T12:19:01Z)
- Assessing Large Language Models' ability to predict how humans balance self-interest and the interest of others [0.0]
Generative artificial intelligence (AI) holds enormous potential to revolutionize decision-making processes.
By leveraging generative AI, humans can benefit from data-driven insights and predictions.
However, for AI to be a reliable assistant for decision-making, it is crucial that it can capture the balance between self-interest and the interest of others.
arXiv Detail & Related papers (2023-07-21T13:23:31Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.