AI Credibility Signals Outrank Institutions and Engagement in Shaping News Perception on Social Media
- URL: http://arxiv.org/abs/2511.02370v1
- Date: Tue, 04 Nov 2025 08:46:54 GMT
- Title: AI Credibility Signals Outrank Institutions and Engagement in Shaping News Perception on Social Media
- Authors: Adnan Hoq, Matthew Facciani, Tim Weninger
- Abstract summary: We present a large-scale mixed-design experiment investigating how AI-generated credibility scores affect user perception of political news. Our results reveal that AI feedback significantly moderates partisan bias and institutional distrust, surpassing traditional engagement signals such as likes and shares.
- Score: 4.197003225775791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-generated content is rapidly becoming a salient component of online information ecosystems, yet its influence on public trust and epistemic judgments remains poorly understood. We present a large-scale mixed-design experiment (N = 1,000) investigating how AI-generated credibility scores affect user perception of political news. Our results reveal that AI feedback significantly moderates partisan bias and institutional distrust, surpassing traditional engagement signals such as likes and shares. These findings demonstrate the persuasive power of generative AI and suggest a need for design strategies that balance epistemic influence with user autonomy.
Related papers
- Preliminary Quantitative Study on Explainability and Trust in AI Systems [0.0]
Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design.
arXiv Detail & Related papers (2025-10-17T15:59:28Z)
- AI Feedback Enhances Community-Based Content Moderation through Engagement with Counterarguments [0.0]
This study explores an AI-assisted hybrid moderation framework in which participants receive AI-generated feedback on their notes. The results show that incorporating feedback improves the quality of notes, with the most substantial gains resulting from argumentative feedback. The research contributes to ongoing discussions about AI's role in political content moderation.
arXiv Detail & Related papers (2025-07-10T18:52:50Z)
- Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in large language models (LLMs) on political opinions and decision-making. We found that participants exposed to partisan biased models were significantly more likely to adopt opinions and make decisions which matched the LLM's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- On the meaning of uncertainty for ethical AI: philosophy and practice [10.591284030838146]
We argue that accounting for uncertainty is a significant way to bring ethical considerations into mathematical reasoning.
We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
arXiv Detail & Related papers (2023-09-11T15:13:36Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data bias, algorithmic bias, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.