Information Credibility in the Social Web: Contexts, Approaches, and
Open Issues
- URL: http://arxiv.org/abs/2001.09473v1
- Date: Sun, 26 Jan 2020 15:42:43 GMT
- Title: Information Credibility in the Social Web: Contexts, Approaches, and
Open Issues
- Authors: Gabriella Pasi and Marco Viviani
- Abstract summary: Credibility, also referred to as believability, is a quality perceived by individuals, who are not always able to discern, with their own cognitive capacities, genuine information from fake information.
Several approaches have been proposed to automatically assess credibility in social media.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In the Social Web scenario, large amounts of User-Generated
Content (UGC) are diffused through social media, often without any form of
traditional trusted intermediation. The risk of running into misinformation
is therefore not negligible. For this reason, assessing and mining the
credibility of online information nowadays constitutes a fundamental
research issue. Credibility, also referred to as believability, is a quality
perceived by individuals, who are not always able to discern, with their own
cognitive capacities, genuine information from fake information. Hence, in
recent years, several approaches have been proposed to automatically assess
credibility in social media. Many of them are based on data-driven models,
i.e., they employ machine learning techniques to identify misinformation,
but model-driven approaches have recently also emerged, as well as
graph-based approaches focusing on credibility propagation, and
knowledge-based ones exploiting Semantic Web technologies.
Three of the main contexts in which the assessment of information credibility
has been investigated concern: (i) the detection of opinion spam in review
sites, (ii) the detection of fake news in microblogging, and (iii) the
credibility assessment of online health-related information. In this article,
the main issues connected to the evaluation of information credibility in the
Social Web, which are shared by the above-mentioned contexts, are discussed. A
concise survey of the approaches and methodologies that have been proposed in
recent years to address these issues is also presented.
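The abstract notes that many credibility-assessment approaches are data-driven, employing machine learning to identify misinformation. As a minimal, purely illustrative sketch of that idea (the toy corpus, labels, and bag-of-words Naive Bayes model below are invented for this example and do not come from the surveyed work):

```python
# Minimal sketch of a data-driven credibility classifier: a hand-rolled
# multinomial Naive Bayes over bag-of-words features, trained on a toy
# corpus of posts labeled "credible" / "fake". All data is invented for
# illustration; real systems use far richer features and corpora.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns (priors, word_counts, vocab)."""
    label_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        toks = tokenize(text)
        word_counts[label].update(toks)
        vocab.update(toks)
    priors = {l: c / len(examples) for l, c in label_counts.items()}
    return priors, word_counts, vocab

def classify(text, priors, word_counts, vocab):
    scores = {}
    for label, prior in priors.items():
        total = sum(word_counts[label].values())
        score = math.log(prior)
        for tok in tokenize(text):
            # Laplace smoothing so unseen words do not zero out a class
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    ("miracle cure doctors hate this trick", "fake"),
    ("shocking secret they do not want you to know", "fake"),
    ("study published in peer reviewed journal", "credible"),
    ("official report released by the health agency", "credible"),
]
model = train(corpus)
print(classify("shocking miracle trick", *model))      # -> "fake"
print(classify("peer reviewed study report", *model))  # -> "credible"
```

The same pipeline shape (feature extraction, supervised training, per-item scoring) underlies the far more sophisticated data-driven detectors the survey covers.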
Related papers
- Finding Fake News Websites in the Wild [0.0860395700487494]
We propose a novel methodology for identifying websites responsible for creating and disseminating misinformation content.
We validate our approach on Twitter by examining various execution modes and contexts.
arXiv Detail & Related papers (2024-07-09T18:00:12Z)
- How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models [95.44559524735308]
Large language or multimodal model based verification has been proposed to scale up online policing mechanisms for mitigating the spread of false and harmful content.
We test the limits of improving foundation model performance without continual updating through an initial study of knowledge transfer.
Our results on two recent multi-modal fact-checking benchmarks, Mocheg and Fakeddit, indicate that knowledge transfer strategies can improve Fakeddit performance over the state-of-the-art by up to 1.7% and Mocheg performance by up to 2.9%.
arXiv Detail & Related papers (2024-06-29T08:39:07Z)
- Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data [6.817247544942709]
We propose a credibility assessment solution for financial creators in social media that combines Natural Language Processing and Machine Learning.
The reputation of the contributors is assessed by automatically classifying their forecasts on asset values by type and verifying these predictions with actual market data.
The system provides natural language explanations of its decisions based on a model-agnostic analysis of relevant features.
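The entry above mentions a model-agnostic analysis of relevant features. One standard model-agnostic technique (used here as an illustrative stand-in, not necessarily the paper's exact method) is permutation importance: shuffle one feature column and measure how much the model's accuracy drops.

```python
# Hedged sketch of permutation importance, a model-agnostic way to rank
# features: a feature whose shuffling hurts accuracy is one the model
# relies on. The toy model and data below are invented for illustration.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, trials=30, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)  # break the feature/label association
        X_perm = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / trials

# Toy "credibility" model: predicts 1 iff feature 0 is positive;
# feature 1 is pure noise, so its importance should be exactly 0.
X = [(1, 5), (2, 1), (-1, 4), (-3, 2), (4, 0), (-2, 3)]
y = [1, 1, 0, 0, 1, 0]
model = lambda row: int(row[0] > 0)
imp0 = permutation_importance(model, X, y, 0)  # positive: model uses it
imp1 = permutation_importance(model, X, y, 1)  # 0.0: noise feature
```

Feature-level scores like these can then be verbalized into the natural-language explanations the entry describes.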
arXiv Detail & Related papers (2024-06-17T08:08:03Z)
- ExFake: Towards an Explainable Fake News Detection Based on Content and Social Context Information [0.0]
ExFake is an explainable fake news detection system based on content and context-level information.
An Explainable AI (XAI) assistant is also adopted to help online social networks (OSN) users develop good reflexes when faced with doubted information.
arXiv Detail & Related papers (2023-11-16T15:57:58Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we propose a data collection schema and curate a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Leveraging Social Interactions to Detect Misinformation on Social Media [25.017602051478768]
We address the problem using a data set created during the COVID-19 pandemic.
It contains cascades of tweets discussing information weakly labeled as reliable or unreliable, based on a previous evaluation of the information source.
We additionally leverage network information. Following the homophily principle, we hypothesize that users who interact are generally interested in similar topics and spread similar kinds of news, which in turn are generally reliable or not.
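The homophily idea above (users who interact tend to share similarly reliable content) suggests spreading a weak reliability score over the interaction graph. A minimal sketch of such label propagation by iterative neighbour averaging, with an invented graph and seed labels (this is not the paper's exact method):

```python
# Label propagation over a user-interaction graph: seed users keep their
# known reliability score; every other user repeatedly takes the mean of
# its neighbours' scores, so labels diffuse along interaction edges.

def propagate(edges, seeds, iterations=20):
    """edges: list of (u, v) undirected interactions;
    seeds: dict user -> score in [0, 1] (1 = reliable), kept fixed."""
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    scores = {u: seeds.get(u, 0.5) for u in neighbours}  # 0.5 = unknown
    for _ in range(iterations):
        new_scores = {}
        for u in scores:
            if u in seeds:          # clamp seed users to their label
                new_scores[u] = seeds[u]
            else:                   # others take the neighbour mean
                nbrs = neighbours[u]
                new_scores[u] = sum(scores[v] for v in nbrs) / len(nbrs)
        scores = new_scores
    return scores

# Toy chain of interacting users: "a" is known reliable, "e" unreliable.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
seeds = {"a": 1.0, "e": 0.0}
result = propagate(edges, seeds)
# Scores interpolate along the chain: b > c > d.
```

The same clamp-and-average scheme is the core of the graph-based credibility-propagation family the main abstract mentions.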
arXiv Detail & Related papers (2023-04-06T10:30:04Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust model which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk of misinformation to Question Answering (QA) models by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
arXiv Detail & Related papers (2021-10-15T01:55:18Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which might cause confusion and chaos unless detected early for mitigation.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- An Approach for Time-aware Domain-based Social Influence Prediction [4.753874889216745]
This paper presents an approach that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness.
The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques to predict highly trustworthy domain-based users.
arXiv Detail & Related papers (2020-01-19T10:39:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.