SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter
Users
- URL: http://arxiv.org/abs/2107.08027v1
- Date: Fri, 16 Jul 2021 17:39:32 GMT
- Title: SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter
Users
- Authors: Tanveer Khan, Antonis Michalas
- Abstract summary: Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model which analysed the behaviour of 50,000 politicians on Twitter.
We classified the political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine.
- Score: 4.609388510200741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networking and micro-blogging services, such as Twitter, play an
important role in sharing digital information. Despite the popularity and
usefulness of social media, there have been many instances where malicious
users found ways to abuse it, for instance by artificially raising or lowering
a user's credibility. As a result, while social media facilitates an
unprecedented ease of access to information, it also introduces a new challenge
- that of ascertaining the credibility of shared information. Currently, there
is no automated way of determining which news or users are credible and which
are not. Hence, establishing a system that can measure the social media user's
credibility has become an issue of great importance. Assigning a credibility
score to a user has piqued the interest of not only the research community but
also most of the big players on both sides - such as Facebook, on the side of
industry, and political parties on the societal one. In this work, we created a
model which, we hope, will ultimately facilitate and support the increase of
trust in the social network communities. Our model collected data and analysed
the behaviour of ~50,000 politicians on Twitter. An influence score, based on
several chosen features, was assigned to each evaluated user. Further, we
classified the political Twitter users as either trusted or untrusted using
random forest, multilayer perceptron, and support vector machine. An active
learning model was used to classify any unlabelled ambiguous records from our
dataset. Finally, to measure the performance of the proposed model, we used
precision, recall, F1 score, and accuracy as the main evaluation metrics.
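The classification and evaluation pipeline described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the feature matrix is a synthetic placeholder standing in for the per-user influence features, and all hyperparameters are assumptions.

```python
# Hypothetical sketch of the abstract's pipeline: train the three named
# classifiers (random forest, multilayer perceptron, SVM) to label users
# as trusted/untrusted, then report precision, recall, F1, and accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Placeholder data standing in for the per-user influence features;
# y is the trusted (1) / untrusted (0) label.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "mlp": MLPClassifier(max_iter=1000, random_state=0),
    "svm": SVC(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = {
        "precision": precision_score(y_test, pred),
        "recall": recall_score(y_test, pred),
        "f1": f1_score(y_test, pred),
        "accuracy": accuracy_score(y_test, pred),
    }
```

The active-learning step for ambiguous unlabelled records is omitted here; it would sit between data collection and the supervised training shown above.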
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Easy-access online social media metrics can effectively identify misinformation sharing users [41.94295877935867]
We find that higher tweet frequency is positively associated with low factuality in shared content, while account age is negatively associated with it.
Our findings show that relying on these easy-access social network metrics could serve as a low-barrier approach for initial identification of users who are more likely to spread misinformation.
arXiv Detail & Related papers (2024-08-27T16:41:13Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behaviour.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Misleading Repurposing on Twitter [3.0254442724635173]
We present the first in-depth and large-scale study of misleading repurposing.
In misleading repurposing, a malicious user changes the identity of their social media account, for example by editing its profile attributes, in order to use the account for a new purpose while retaining its followers.
We propose a definition for the behavior and a methodology that uses supervised learning on data mined from the Internet Archive's Twitter Stream Grab to flag repurposed accounts.
arXiv Detail & Related papers (2020-10-20T20:19:01Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Breaking the Communities: Characterizing community changing users using text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for social media user classification which detects "community breakers".
arXiv Detail & Related papers (2020-08-24T23:44:51Z)
- Sentiment Analysis on Social Media Content [0.0]
The aim of this paper is to present a model that can perform sentiment analysis of real data collected from Twitter.
Data in Twitter is highly unstructured which makes it difficult to analyze.
Our proposed model differs from prior work in this field because it combines the use of supervised and unsupervised machine learning algorithms.
arXiv Detail & Related papers (2020-07-04T17:03:30Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.