Trust and Believe -- Should We? Evaluating the Trustworthiness of
Twitter Users
- URL: http://arxiv.org/abs/2210.15214v1
- Date: Thu, 27 Oct 2022 06:57:19 GMT
- Title: Trust and Believe -- Should We? Evaluating the Trustworthiness of
Twitter Users
- Authors: Tanveer Khan and Antonis Michalas
- Abstract summary: Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score to each evaluated user.
- Score: 5.695742189917657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networking and micro-blogging services, such as Twitter, play an
important role in sharing digital information. Despite the popularity and
usefulness of social media, they are regularly abused by corrupt users. One of
these nefarious activities is so-called fake news -- a "virus" that has been
spreading rapidly thanks to the hospitable environment provided by social media
platforms. The extensive spread of fake news is now becoming a major problem
with far-reaching negative repercussions on both individuals and society.
Hence, the identification of fake news on social media is a problem of utmost
importance that has attracted the interest not only of the research community
but also of major players on both sides, such as Facebook on the industry side
and political parties on the societal one. In this work, we create a
model through which we hope to be able to offer a solution that will instill
trust in social network communities. Our model analyses the behaviour of 50,000
politicians on Twitter and assigns an influence score to each evaluated user
based on several collected and analysed features and attributes. Next, we
classify political Twitter users as either trustworthy or untrustworthy using
random forest and support vector machine classifiers. An active learning model
is then used to classify ambiguous unlabeled records in our dataset.
Finally, to measure the performance of the proposed model, we used accuracy as
the main evaluation metric.
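The pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the features are synthetic stand-ins for the collected Twitter attributes, and the uncertainty-based selection step is one common way to realise the active-learning component the abstract mentions.

```python
# Sketch of the described pipeline: classify users as trustworthy or
# untrustworthy with a random forest and an SVM, evaluate with accuracy,
# then flag ambiguous unlabeled records for labeling (active learning).
# Feature values here are synthetic placeholders, not real Twitter data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-user features (e.g. follower counts,
# retweet activity); the paper collects such attributes from Twitter.
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = trustworthy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
svm = SVC(probability=True, random_state=0).fit(X_train, y_train)

# Accuracy as the main evaluation metric, as in the paper.
for name, clf in [("random forest", rf), ("SVM", svm)]:
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy: {acc:.2f}")

# Active-learning step: select the most ambiguous unlabeled records
# (predicted probability closest to 0.5) to be labeled next.
X_unlabeled = rng.normal(size=(200, 6))
proba = rf.predict_proba(X_unlabeled)[:, 1]
ambiguous = np.argsort(np.abs(proba - 0.5))[:10]
print("indices to label next:", ambiguous.tolist())
```

Uncertainty sampling is only one possible query strategy; the paper does not specify which active-learning criterion it uses.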
Related papers
- Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts [4.06613683722116]
This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news on Twitter.
We created 28 bots that posted contextual replies to users tweeting about sports, entertainment, or lifestyle.
To test differential effects by gender of the bots, treated users were randomly assigned to receive responses by bots presented as female or male.
We find that the treated users followed more news accounts and the users in the female bot treatment were more likely to like news content than the control.
arXiv Detail & Related papers (2024-03-20T07:44:06Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - The Spread of Propaganda by Coordinated Communities on Social Media [43.2770127582382]
We analyze the spread of propaganda and its interplay with coordinated behavior on a large Twitter dataset about the 2019 UK general election.
The combination of the use of propaganda and coordinated behavior allows us to uncover the authenticity and harmfulness of the different communities.
arXiv Detail & Related papers (2021-09-27T13:39:10Z) - SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter
Users [4.609388510200741]
Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model which analysed the behaviour of 50,000 politicians on Twitter.
We classified the political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine.
arXiv Detail & Related papers (2021-07-16T17:39:32Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behaviour.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Breaking the Communities: Characterizing community changing users using
text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for social media users classification which detects "community breakers"
arXiv Detail & Related papers (2020-08-24T23:44:51Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.