AI summaries in online search influence users' attitudes
- URL: http://arxiv.org/abs/2511.22809v2
- Date: Thu, 04 Dec 2025 15:16:29 GMT
- Title: AI summaries in online search influence users' attitudes
- Authors: Yiwei Xu, Saloni Dash, Sungha Kang, Wang Liao, Emma S. Spiro,
- Abstract summary: This study examined how AI-generated summaries affect how users think about different issues. Users perceived the AI summaries as more useful when they emphasized health harms versus benefits. These findings suggest that AI-generated search summaries can significantly shape public perceptions.
- Score: 3.459756369056329
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examined how AI-generated summaries, which have become visually prominent in online search results, affect how users think about different issues. In a preregistered randomized controlled experiment, participants (N = 2,004) viewed mock search result pages varying in the presence (vs. absence), placement (top vs. middle), and stance (benefit-framed vs. harm-framed) of AI-generated summaries across four publicly debated topics. Compared to a no-summary control group, participants exposed to AI-generated summaries reported issue attitudes, behavioral intentions, and policy support that aligned more closely with the AI summary stance. The summaries placed at the top of the page produced stronger shifts in users' issue attitudes (but not behavioral intentions or policy support) than those placed at the middle of the page. We also observed moderating effects from issue familiarity and general trust toward AI. In addition, users perceived the AI summaries as more useful when they emphasized health harms versus benefits. These findings suggest that AI-generated search summaries can significantly shape public perceptions, raising important implications for the design and regulation of AI-integrated information ecosystems.
Related papers
- Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems [47.03825808787752]
This paper transitions from literature review to practical countermeasures. We report on improved AI-generated content through Large Language Models (LLMs) and multimodal systems. We discuss mitigation strategies including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI.
arXiv Detail & Related papers (2026-01-29T16:42:22Z) - From SERPs to Sound: How Search Engine Result Pages and AI-generated Podcasts Interact to Influence User Attitudes on Controversial Topics [18.17104725797712]
We investigate how search engine result pages (SERPs) and AI-generated podcasts interact to shape user opinions. A majority of users in our study exhibited attitude change, and we found an effect of sequence on attitude change. Our results further revealed a role of viewpoint bias and the degree of topic controversiality in shaping attitude change, although we found no effect of individual moderators.
arXiv Detail & Related papers (2026-01-16T13:31:11Z) - AI Feedback Enhances Community-Based Content Moderation through Engagement with Counterarguments [0.0]
This study explores an AI-assisted hybrid moderation framework in which participants receive AI-generated feedback on their notes. The results show that incorporating feedback improves the quality of notes, with the most substantial gains resulting from argumentative feedback. The research contributes to ongoing discussions about AI's role in political content moderation.
arXiv Detail & Related papers (2025-07-10T18:52:50Z) - Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z) - Users Favor LLM-Generated Content -- Until They Know It's AI [0.0]
We investigate how individuals evaluate human-generated and large language model (LLM)-generated responses to popular questions when the source of the content is either concealed or disclosed. Our findings indicate that, overall, participants tend to prefer AI-generated responses. When the AI origin is revealed, this preference diminishes significantly, suggesting that evaluative judgments are influenced by the disclosure of the response's provenance.
arXiv Detail & Related papers (2025-02-23T11:14:02Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
This study focuses on playful interactions exhibited by users of a popular AI technology, ChatGPT. We found that more than half (54%) of user discourse revolved around playful interactions. The study examines how these interactions can help users understand AI's agency, shape human-AI relationships, and provide insights for designing AI systems.
arXiv Detail & Related papers (2024-01-16T14:44:13Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups -- people with and without AI background -- perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.