From SERPs to Sound: How Search Engine Result Pages and AI-generated Podcasts Interact to Influence User Attitudes on Controversial Topics
- URL: http://arxiv.org/abs/2601.11282v1
- Date: Fri, 16 Jan 2026 13:31:11 GMT
- Title: From SERPs to Sound: How Search Engine Result Pages and AI-generated Podcasts Interact to Influence User Attitudes on Controversial Topics
- Authors: Junjie Wang, Gaole He, Alisa Rieger, Ujwal Gadiraju
- Abstract summary: We investigate how search engine result pages (SERPs) and AI-generated podcasts interact to shape user opinions. A majority of users in our study exhibited attitude change, and we found an effect of exposure sequence on attitude change. Our results further revealed a role of viewpoint bias and the degree of topic controversiality in shaping attitude change, although we found no effect of individual moderators.
- Score: 18.17104725797712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compared to search engine result pages (SERPs), AI-generated podcasts represent a relatively new and more passive modality of information consumption, delivering narratives in a naturally engaging format. As these two media increasingly converge in everyday information-seeking behavior, it is essential to explore how their interaction influences user attitudes, particularly in contexts involving controversial, value-laden, and often debated topics. Addressing this need, we aim to understand how the information mediums of present-day SERPs and AI-generated podcasts interact to shape the opinions of users. To this end, through a controlled user study (N=483), we investigated the attitudinal effects of consuming information via SERPs and AI-generated podcasts, focusing on how the sequence and modality of exposure shape user opinions. A majority of users in our study exhibited attitude change, and we found an effect of exposure sequence on attitude change. Our results further revealed a role of viewpoint bias and the degree of topic controversiality in shaping attitude change, although we found no effect of individual moderators.
Related papers
- Human Cognitive Biases in Explanation-Based Interaction: The Case of Within and Between Session Order Effect [46.80756527630539]
Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. Recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users' trust and, critically, the quality of their feedback. To clarify the interplay between order effects and explanatory interaction, we ran two larger-scale user studies designed to mimic common XIL tasks.
arXiv Detail & Related papers (2025-12-04T12:59:54Z) - AI summaries in online search influence users' attitudes [3.459756369056329]
This study examined how AI-generated summaries affect how users think about different issues. Users perceived the AI summaries as more useful when they emphasized health harms versus benefits. These findings suggest that AI-generated search summaries can significantly shape public perceptions.
arXiv Detail & Related papers (2025-11-27T23:45:19Z) - AI Feedback Enhances Community-Based Content Moderation through Engagement with Counterarguments [0.0]
This study explores an AI-assisted hybrid moderation framework in which participants receive AI-generated feedback on their notes. The results show that incorporating feedback improves the quality of notes, with the most substantial gains resulting from argumentative feedback. The research contributes to ongoing discussions about AI's role in political content moderation.
arXiv Detail & Related papers (2025-07-10T18:52:50Z) - Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs). The user agent induces customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs. Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z) - Towards Investigating Biases in Spoken Conversational Search [10.120634413661929]
We review how biases and user attitude changes have been studied in screen-based web search.
We propose an experimental setup with variables, data, and instruments to explore biases in a voice-based setting like Spoken Conversational Search.
arXiv Detail & Related papers (2024-09-02T01:54:33Z) - Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities for critical thinking in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Dehumanizing Voice Technology: Phonetic & Experiential Consequences of Restricted Human-Machine Interaction [0.0]
We show that requests lead to an increase in phonetic convergence and lower phonetic latency, and ultimately a more natural task experience for consumers.
We provide evidence that altering the required input to initiate a conversation with smart objects provokes systematic changes both in terms of consumers' subjective experience and objective phonetic changes in the human voice.
arXiv Detail & Related papers (2021-11-02T22:49:25Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency of users to engage with both types of content, showing a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Linking the Dynamics of User Stance to the Structure of Online Discussions [6.853826783413853]
We investigate whether users' stance concerning contentious subjects is influenced by the online discussions they are exposed to.
We set up a series of predictive exercises based on machine learning models.
We find that the most informative features relate to the stance composition of the discussion in which users prefer to engage.
arXiv Detail & Related papers (2021-01-25T02:08:54Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.