Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts
- URL: http://arxiv.org/abs/2403.13362v3
- Date: Sat, 30 Mar 2024 03:10:48 GMT
- Title: Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts
- Authors: Hadi Askari, Anshuman Chhabra, Bernhard Clemm von Hohenberg, Michael Heseltine, Magdalena Wojcieszak
- Abstract summary: This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news on Twitter.
We created 28 bots that replied to users tweeting about sports, entertainment, or lifestyle with a contextual reply.
To test differential effects by gender of the bots, treated users were randomly assigned to receive responses by bots presented as female or male.
We find that treated users followed more news accounts, and that users in the female bot treatment were more likely to like news content than the control.
- Score: 4.06613683722116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Polarization, declining trust, and wavering support for democratic norms are pressing threats to U.S. democracy. Exposure to verified and quality news may lower individual susceptibility to these threats and make citizens more resilient to misinformation, populism, and hyperpartisan rhetoric. This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news in an ecologically valid setting. We rely on a large-scale two-week-long field experiment (from 1/19/2023 to 2/3/2023) on 28,457 Twitter users. We created 28 bots utilizing GPT-2 that replied to users tweeting about sports, entertainment, or lifestyle with a contextual reply containing two hardcoded elements: a URL to the topic-relevant section of a quality news organization and an encouragement to follow its Twitter account. To further test differential effects by gender of the bots, treated users were randomly assigned to receive responses by bots presented as female or male. We examine whether our over-time intervention enhances the following of news media organizations, the sharing and liking of news content, tweeting about politics, and the liking of political content. We find that treated users followed more news accounts, and that users in the female bot treatment were more likely to like news content than the control. Most of these results, however, were small in magnitude and confined to already politically interested Twitter users, as indicated by their pre-treatment tweeting about politics. These findings have implications for social media and news organizations, and also offer direction for future work on how Large Language Models and other computational interventions can effectively enhance individual on-platform engagement with quality news and public affairs.
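The reply-assembly step described in the abstract (a GPT-2-generated contextual sentence followed by two hardcoded elements: a section URL and a follow nudge) can be illustrated with a minimal sketch. This is not the authors' released code: the prompt wording, the off-the-shelf `gpt2` checkpoint from Hugging Face, and the outlet URL/handle below are assumptions made purely for illustration.

```python
# Minimal sketch of one reply-assembly step, assuming an off-the-shelf GPT-2
# checkpoint via Hugging Face transformers. Prompt, URL, and handle are
# hypothetical; the original study's prompts and accounts are not reproduced here.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")  # assumed GPT-2 checkpoint

def build_reply(user_tweet: str, topic: str, outlet_url: str, outlet_handle: str) -> str:
    # Generate a short contextual sentence conditioned on the user's tweet.
    prompt = f'Reply in one friendly sentence to this {topic} tweet: "{user_tweet}"\nReply:'
    out = generator(prompt, max_new_tokens=30, do_sample=True, return_full_text=False)
    continuation = out[0]["generated_text"].strip()
    contextual = continuation.splitlines()[0] if continuation else ""
    # Append the two hardcoded elements regardless of what the model produced.
    return (f"{contextual} Check out more {topic} coverage at {outlet_url} "
            f"and follow @{outlet_handle} on Twitter.").strip()

if __name__ == "__main__":
    print(build_reply(
        user_tweet="What a comeback in the fourth quarter last night!",
        topic="sports",
        outlet_url="https://news.example.com/sports",  # hypothetical URL
        outlet_handle="ExampleNewsSports",             # hypothetical handle
    ))
```

The sketch covers only text assembly; posting replies from the 28 bot accounts, topic detection, and treatment assignment in the actual experiment are outside its scope.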
Related papers
- Engagement, Content Quality and Ideology over Time on the Facebook URL Dataset [3.443622476405787]
This study examines user engagement metrics related to news URLs in the U.S. from January 2017 to December 2020.
By incorporating the ideological alignment and quality of news sources, along with users' political preferences, we construct weighted averages of ideology and quality of news consumption for liberal, conservative, and moderate audiences.
We identify two significant shifts in trends for both metrics, each coinciding with changes in user engagement.
arXiv Detail & Related papers (2024-09-20T12:50:17Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We find a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- Retweet-BERT: Political Leaning Detection Using Language Features and Information Diffusion on Social Networks [30.143148646797265]
We introduce Retweet-BERT, a simple and scalable model to estimate the political leanings of Twitter users.
Our assumptions stem from patterns of network and linguistic homophily among people who share similar ideologies.
arXiv Detail & Related papers (2022-07-18T02:18:20Z)
- Reaching the bubble may not be enough: news media role in online political polarization [58.720142291102135]
A way of reducing polarization would be to distribute cross-partisan news among individuals with distinct political orientations.
This study investigates whether this holds in the context of nationwide elections in Brazil and Canada.
arXiv Detail & Related papers (2021-09-18T11:34:04Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content, which may reflect either dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Breaking the Communities: Characterizing community changing users using text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for classifying social media users that detects "community breakers".
arXiv Detail & Related papers (2020-08-24T23:44:51Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)