Sockpuppet Detection: a Telegram case study
- URL: http://arxiv.org/abs/2105.10799v1
- Date: Sat, 22 May 2021 19:28:10 GMT
- Title: Sockpuppet Detection: a Telegram case study
- Authors: Gabriele Pisciotta, Miriana Somenzi, Elisa Barisani, Giulio Rossetti
- Score: 0.5620334754517148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Online Social Networks (OSNs), users often create multiple
accounts that publicly appear to belong to different people but are actually
fake identities of the same person. These fictitious characters can be
exploited to carry out abusive behaviors such as manipulating opinions,
spreading fake news, and harassing other users. In the literature this is
known as the Sockpuppet problem. In our work we focus on Telegram, a
widespread instant messaging application, known for its exploitation by
members of organized crime and terrorist groups and, more generally, for its
high presence of users with offensive behaviors.
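The abstract does not detail the paper's detection method. As a toy illustration of the general sockpuppet-detection idea, the sketch below flags candidate account pairs whose writing style is suspiciously similar, using character n-gram profiles and cosine similarity; the features, threshold, and function names are illustrative assumptions, not the authors' approach.

```python
# Hypothetical sketch of stylometric sockpuppet detection (not the paper's method).
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character n-gram counts as a crude writing-style fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def likely_sockpuppets(messages_by_account, threshold=0.8):
    """Return account pairs whose concatenated messages exceed the similarity threshold."""
    profiles = {acc: char_ngrams(" ".join(msgs))
                for acc, msgs in messages_by_account.items()}
    accounts = sorted(profiles)
    return [(a, b)
            for i, a in enumerate(accounts)
            for b in accounts[i + 1:]
            if cosine(profiles[a], profiles[b]) >= threshold]
```

In practice, real systems combine such content features with behavioral and network signals (posting times, shared chats, interaction patterns), since style alone is easy to imitate or disguise.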
Related papers
- Unsupervised detection of coordinated fake-follower campaigns on social
media [1.3035246321276739]
We present a novel unsupervised detection method designed to target a specific category of malicious accounts.
Our framework identifies anomalous following patterns among all the followers of a social media account.
We find that these detected groups of anomalous followers exhibit consistent behavior across multiple accounts.
arXiv Detail & Related papers (2023-10-31T12:30:29Z)
- User Identity Linkage in Social Media Using Linguistic and Social Interaction Features [11.781485566149994]
User identity linkage aims to reveal social media accounts likely to belong to the same natural person.
This work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity.
The model's efficacy is demonstrated on two cases involving abusive and terrorism-related Twitter content.
arXiv Detail & Related papers (2023-08-22T15:10:38Z)
- Trust and Believe -- Should We? Evaluating the Trustworthiness of Twitter Users [5.695742189917657]
Fake news on social media is a major problem with far-reaching negative repercussions on both individuals and society.
In this work, we create a model through which we hope to offer a solution that will instill trust in social network communities.
Our model analyses the behaviour of 50,000 politicians on Twitter and assigns an influence score for each evaluated user.
arXiv Detail & Related papers (2022-10-27T06:57:19Z)
- Detecting fake accounts through Generative Adversarial Network in online social media [0.0]
This paper proposes a novel method using user similarity measures and the Generative Adversarial Network (GAN) algorithm to identify fake user accounts in the Twitter dataset.
Despite the problem's complexity, the method achieves an AUC rate of 80% in classifying and detecting fake accounts.
arXiv Detail & Related papers (2022-10-25T10:20:27Z)
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find activities, such as carding, that are also present on privacy-preserving services of the Dark Web.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
- Identity Signals in Emoji Do not Influence Perception of Factual Truth on Twitter [90.14874935843544]
Prior work has shown that Twitter users use skin-toned emoji as an act of self-representation to express their racial/ethnic identity.
We test whether this signal of identity can influence readers' perceptions about the content of a post containing that signal.
We find that neither emoji nor profile photo has an effect on how readers rate these facts.
arXiv Detail & Related papers (2021-05-07T10:56:19Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.