Silence Speaks Volumes: Re-weighting Techniques for Under-Represented
Users in Fake News Detection
- URL: http://arxiv.org/abs/2308.02011v1
- Date: Thu, 3 Aug 2023 20:04:20 GMT
- Title: Silence Speaks Volumes: Re-weighting Techniques for Under-Represented
Users in Fake News Detection
- Authors: Mansooreh Karami, David Mosallanezhad, Paras Sheth, Huan Liu
- Abstract summary: A mere 1% of users generate the majority of the content on social networking sites.
The remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent.
We propose to leverage re-weighting techniques to make the silent majority heard, and in turn, investigate whether the cues from these users can improve the performance of the current models for the downstream task of fake news detection.
- Score: 25.5495085102178
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Social media platforms provide a rich environment for analyzing user
behavior. Recently, deep learning-based methods have been a mainstream approach
for social media analysis models involving complex patterns. However, these
methods are susceptible to biases in the training data, such as participation
inequality. Notably, a mere 1% of users generate the majority of the content
on social networking sites, while the remaining users, though engaged to
varying degrees, tend to be less active in content creation and largely silent.
These silent users consume the information propagated on the platform, but their voices, attitudes, and interests are not reflected in the online content, which biases the decisions of current methods toward the opinions of active users. As a result, models can mistake the loudest users for the majority. We propose leveraging re-weighting techniques to make the silent majority heard and, in turn, investigate whether the cues from these users can improve the performance of current models on the downstream task of fake news detection.
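The listing does not include code, so below is a minimal sketch of one plausible re-weighting scheme, assuming inverse-frequency weights derived from each author's posting activity: examples tied to less active users contribute more to the training loss. The `activity_weights` helper and its formula are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

def activity_weights(post_counts, eps=1.0):
    # Inverse-frequency weights: the less a user posts, the larger the
    # weight of their examples (illustrative formula, not the paper's).
    w = 1.0 / (post_counts.float() + eps)
    return w * (len(w) / w.sum())  # normalize to a mean weight of 1

# Toy batch: logits from any fake-news classifier, binary labels, and the
# posting activity of the user behind each example.
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
post_counts = torch.tensor([500, 2, 1, 300, 0, 4, 1000, 1])

per_example = nn.CrossEntropyLoss(reduction="none")(logits, labels)
loss = (activity_weights(post_counts) * per_example).mean()
loss.backward()  # gradients now emphasize examples from silent users
```

Normalizing the weights to a mean of one keeps the effective learning rate comparable to unweighted training.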
Related papers
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
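As a rough illustration of graph-based propagation over a belief-centered graph, the sketch below mixes each node's features with its neighbors' over a few steps. The `propagate` helper and its mixing rule are generic assumptions; SocialSense's actual operator and graph construction may differ.

```python
import numpy as np

def propagate(adj, features, steps=2, alpha=0.5):
    # One generic propagation rule: each step mixes a node's features
    # with the mean of its neighbors' (not SocialSense's exact operator).
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    norm_adj = adj / deg  # row-normalized adjacency
    h = features
    for _ in range(steps):
        h = alpha * h + (1 - alpha) * norm_adj @ h
    return h

# Toy graph: 4 users with symmetric follow edges and 3-dim "belief"
# features (e.g., stance scores inferred by an LLM).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
beliefs = np.random.rand(4, 3)
smoothed = propagate(adj, beliefs)
```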
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
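To make the role of contextual information concrete, here is a minimal sketch of a moderation classifier whose input is the target comment prepended with the preceding chat lines. The `with_context` helper, the "[SEP]" delimiter, and the TF-IDF plus logistic regression pipeline are illustrative stand-ins, not the paper's models or context types.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def with_context(chat, idx, k=2):
    # Prepend the k preceding chat lines to the target comment; the
    # "[SEP]" delimiter is an arbitrary choice for this sketch.
    context = " ".join(chat[max(0, idx - k):idx])
    return context + " [SEP] " + chat[idx]

chat = ["gg everyone", "that play was insane", "you are trash, quit the game"]
X = [with_context(chat, i) for i in range(len(chat))]
y = [0, 0, 1]  # toy norm-violation labels (1 = violation)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)
```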
- Thread With Caution: Proactively Helping Users Assess and Deescalate Tension in Their Online Discussions [13.455968033357065]
Incivility remains a major challenge for online discussion platforms.
Traditionally, platforms have relied on moderators, with or without algorithmic assistance, to take corrective actions such as removing comments or banning users.
We propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in.
arXiv Detail & Related papers (2022-12-02T19:00:03Z)
- Like Article, Like Audience: Enforcing Multimodal Correlations for Disinformation Detection [20.394457328537975]
Correlations between user-generated and user-shared content can be leveraged for detecting disinformation in online news articles.
We develop a multimodal learning algorithm for disinformation detection.
arXiv Detail & Related papers (2021-08-31T14:50:16Z)
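A minimal sketch of enforcing such a correlation: align each article's embedding with an embedding aggregated from the user-generated content of the users who shared it, via a cosine-similarity term. The `correlation_loss` function is an illustrative objective, not the paper's exact multimodal algorithm.

```python
import torch
import torch.nn.functional as F

def correlation_loss(article_emb, audience_emb):
    # Pull each article's embedding toward the embedding of its audience
    # (illustrative cosine alignment, not the paper's exact objective).
    return 1.0 - F.cosine_similarity(article_emb, audience_emb).mean()

# Toy embeddings: one row per article; audience_emb aggregates the
# user-generated content (e.g., comments) of the users who shared it.
article_emb = torch.randn(16, 64, requires_grad=True)
audience_emb = torch.randn(16, 64)

loss = correlation_loss(article_emb, audience_emb)
loss.backward()
```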
- SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter Users [4.609388510200741]
Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model which analysed the behaviour of 50,000 politicians on Twitter.
We classified the political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine classifiers.
arXiv Detail & Related papers (2021-07-16T17:39:32Z)
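A rough sketch of this trusted/untrusted classification with the three named model families, assuming scikit-learn; the synthetic features stand in for the behavioural features the study extracts, and all sizes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-account behavioural features (e.g., tweet
# frequency, follower ratio) and trusted (1) / untrusted (0) labels.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = rng.integers(0, 2, 200)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100),
    "mlp": MLPClassifier(max_iter=500),
    "svm": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```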
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable items, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Learning User Embeddings from Temporal Social Media Data: A Survey [15.324014759254915]
We survey representative work on learning a concise latent user representation (a.k.a. user embedding) that can capture the main characteristics of a social media user.
The learned user embeddings can later be used to support different downstream user analysis tasks such as personality modeling, suicidal risk assessment and purchase decision prediction.
arXiv Detail & Related papers (2021-05-17T16:22:43Z)
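As one simple instance of the family of methods such a survey covers, the sketch below builds a user embedding by recency-weighted averaging of the user's post embeddings. The `user_embedding` helper and the exponential-decay weighting are illustrative; many surveyed works use sequence models instead.

```python
import numpy as np

def user_embedding(post_embs, timestamps, half_life=30.0):
    # Recency-weighted average of a user's post embeddings; newer posts
    # count more. One simple option among many (RNNs, Transformers, ...).
    age = timestamps.max() - timestamps        # days since each post
    w = np.exp(-np.log(2) * age / half_life)   # weight halves per half_life
    return (w[:, None] * post_embs).sum(axis=0) / w.sum()

# Toy data: 5 posts with 16-dim text embeddings and posting days.
posts = np.random.rand(5, 16)
days = np.array([0.0, 10.0, 40.0, 55.0, 60.0])
u = user_embedding(posts, days)
```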
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
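The paper's principled approach is not spelled out in this summary, so the sketch below shows a standard correction for selection bias, inverse propensity weighting, as an illustrative stand-in: estimate each user's propensity of being observed from their attributes, then weight observed examples by its inverse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: user attributes, plus an indicator of whether the user's
# sharing behaviour is observed at all (the selection mechanism).
rng = np.random.default_rng(1)
X = rng.random((500, 6))            # user attributes / online activities
observed = rng.integers(0, 2, 500)  # 1 if the user's shares are observed

# Estimate each user's propensity of being observed from attributes...
propensity = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]

# ...then weight observed examples by 1/propensity so the observed
# sample behaves like the full user population.
weights = np.where(observed == 1, 1.0 / np.clip(propensity, 1e-3, None), 0.0)
```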
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage a limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework that estimates the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
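A minimal sketch of the general learning-to-reweight recipe behind this kind of meta-learning: choose per-instance weights for the weak examples so that a virtual gradient step on the weighted weak loss reduces the loss on the small clean set. The linear model, data sizes, and update rule below are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def forward(params, x):
    W, b = params  # a plain linear model keeps the virtual step explicit
    return x @ W.T + b

def ce(logits, y):
    return F.cross_entropy(logits, y, reduction="none")

# Toy data: many weakly labelled examples, a small clean set.
x_weak, y_weak = torch.randn(64, 10), torch.randint(0, 2, (64,))
x_clean, y_clean = torch.randn(8, 10), torch.randint(0, 2, (8,))

W = torch.zeros(2, 10, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
lr = 0.1

for _ in range(20):
    # Perturbation weights: start at zero, receive meta-gradients below.
    eps = torch.zeros(64, requires_grad=True)
    weak_loss = (eps * ce(forward((W, b), x_weak), y_weak)).sum()
    gW, gb = torch.autograd.grad(weak_loss, (W, b), create_graph=True)

    # Virtual step: how would the clean loss move if we trusted eps?
    clean_loss = ce(forward((W - lr * gW, b - lr * gb), x_clean), y_clean).mean()
    g_eps = torch.autograd.grad(clean_loss, eps)[0]

    # Weak instances whose upweighting would lower the clean loss get
    # positive weight; the rest are discarded.
    wgt = torch.clamp(-g_eps, min=0)
    wgt = wgt / wgt.sum().clamp(min=1e-8)

    # Real update with the learned instance weights.
    loss = (wgt.detach() * ce(forward((W, b), x_weak), y_weak)).sum()
    gW, gb = torch.autograd.grad(loss, (W, b))
    with torch.no_grad():
        W -= lr * gW
        b -= lr * gb
```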