Interactively Learning Social Media Representations Improves News Source
Factuality Detection
- URL: http://arxiv.org/abs/2309.14966v1
- Date: Tue, 26 Sep 2023 14:36:19 GMT
- Title: Interactively Learning Social Media Representations Improves News Source
Factuality Detection
- Authors: Nikhil Mehta and Dan Goldwasser
- Abstract summary: Rapidly detecting fake news, especially as new events arise, is important to prevent misinformation.
We propose to approach this problem interactively, where humans can interact with an automated system to help it learn a better social media representation.
- Score: 31.172580066204635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of social media has enabled the widespread propagation of fake news,
text that is published with an intent to spread misinformation and sway
beliefs. Rapidly detecting fake news, especially as new events arise, is
important to prevent misinformation.
While prior works have tackled this problem using supervised learning
systems, automatically modeling the complexities of the social media landscape
that enables the spread of fake news is challenging. At the same time, having
humans fact-check all news is not scalable. Thus, in this paper, we propose to
approach this problem interactively, where humans can interact with an
automated system to help it learn a better social media representation. On
real-world events, our experiments show performance improvements in detecting
the factuality of news sources, even after only a few human interactions.
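The interactive setup described in the abstract can be pictured as a feedback loop: a human labels a source's factuality, and both a classifier and that source's learned representation are nudged by the feedback. The sketch below is a minimal toy illustration only, assuming random embeddings, a linear classifier, and hypothetical function names (`predict`, `interact`); the paper's actual model operates over a much richer social media graph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each news source gets a social-media embedding. In the paper
# these representations come from a graph of sources, articles, and users;
# here they are random vectors purely for illustration.
n_sources, dim = 20, 8
embeddings = rng.normal(size=(n_sources, dim))
weights = np.zeros(dim)  # linear factuality classifier


def predict(emb, w):
    """Probability that a source is factual (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-emb @ w))


def interact(source_id, human_label, emb, w, lr=0.5):
    """One human interaction: the annotator labels a source, and both the
    classifier and that source's representation take a gradient step on
    the resulting log-loss."""
    err = predict(emb[source_id], w) - human_label
    grad_w = err * emb[source_id]
    grad_e = err * w
    w = w - lr * grad_w                            # update the classifier
    emb[source_id] = emb[source_id] - lr * grad_e  # refine the representation
    return w, emb


# Simulate a handful of interactions: sources 0-4 are labeled factual,
# sources 5-9 are labeled non-factual.
for step in range(50):
    sid = step % 10
    weights, embeddings = interact(sid, 1.0 if sid < 5 else 0.0,
                                   embeddings, weights)
```

After a few simulated interactions, the labeled factual sources score higher than the labeled non-factual ones, mirroring how human feedback steers both the detector and the representation.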
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- An Interactive Framework for Profiling News Media Sources [26.386860411085053]
We propose an interactive framework for news media profiling.
It combines the strengths of graph based news media profiling models, Pre-trained Large Language Models, and human insight.
With as little as 5 human interactions, our framework can rapidly detect fake and biased news media.
arXiv Detail & Related papers (2023-09-14T02:03:45Z)
- Fake News Detection and Behavioral Analysis: Case of COVID-19 [0.22940141855172028]
The "infodemic" caused by the spread of fake news about the pandemic has been a global issue.
Readers could mistake fake news for real news, and consequently have less access to authentic information.
It is challenging to accurately identify fake news data in social media posts.
arXiv Detail & Related papers (2023-05-25T13:42:08Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interaction among news, while focusing on important news relations with its dual-level attention mechanism.
Our approach yields remarkable performance and maintains the high performance even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks-a survey [0.7120858995754653]
In both news media and social media, information spreads at high speed but often without accuracy, so a detection mechanism should predict fake news fast enough to curb its dissemination.
In this paper we present what fake news is, why it matters, its overall impact on different areas, different ways to detect fake news on social media, and existing detection algorithms that can help overcome the issue.
arXiv Detail & Related papers (2022-01-20T07:43:15Z)
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets the content with automatic feature extraction and the relevance of the text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
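The weak-supervision idea in the last entry can be illustrated with a toy stand-in: fit a reference model on a small clean set, score the quality of each weakly labeled instance by how much the reference model agrees with its weak label, then train the final detector on the quality-weighted union. All data and names below are hypothetical; the paper itself learns instance weights inside a meta-learning framework rather than using this fixed heuristic.

```python
import numpy as np

rng = np.random.default_rng(1)


def make_data(n, label_noise=0.0):
    """Toy 2-d data: the true label is the sign of the first coordinate,
    optionally flipped with probability `label_noise` (the noisy labels
    stand in for weak signals from social engagements)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    flip = rng.random(n) < label_noise
    y[flip] = 1 - y[flip]
    return X, y


def fit_logreg(X, y, sample_w=None, lr=0.1, epochs=200):
    """Weighted logistic regression via batch gradient descent."""
    w = np.zeros(X.shape[1])
    if sample_w is None:
        sample_w = np.ones(len(y))
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (sample_w * (p - y)) / len(y)
        w -= lr * grad
    return w


X_clean, y_clean = make_data(20)                  # small clean set
X_weak, y_weak = make_data(200, label_noise=0.3)  # noisy weak labels

# Step 1: fit a reference model on the clean data only.
w_clean = fit_logreg(X_clean, y_clean)

# Step 2: score each weak instance by the reference model's agreement
# with its weak label (a fixed stand-in for the learned quality estimator).
p_weak = 1.0 / (1.0 + np.exp(-X_weak @ w_clean))
quality = np.where(y_weak == 1, p_weak, 1 - p_weak)

# Step 3: train the final detector on clean + quality-weighted weak data.
X_all = np.vstack([X_clean, X_weak])
y_all = np.concatenate([y_clean, y_weak])
w_all = np.concatenate([np.ones(len(y_clean)), quality])
w_final = fit_logreg(X_all, y_all, sample_w=w_all)
```

Down-weighting weak instances that the clean reference model disagrees with keeps the large noisy set useful without letting its flipped labels dominate the final detector.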
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.