Information Credibility in the Social Web: Contexts, Approaches, and
Open Issues
- URL: http://arxiv.org/abs/2001.09473v1
- Date: Sun, 26 Jan 2020 15:42:43 GMT
- Title: Information Credibility in the Social Web: Contexts, Approaches, and
Open Issues
- Authors: Gabriella Pasi and Marco Viviani
- Abstract summary: Credibility, also referred to as believability, is a quality perceived by individuals, who are not always able to discern, with their own cognitive capacities, genuine information from fake information.
Several approaches have been proposed to automatically assess credibility in social media.
- Score: 2.2133187119466116
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In the Social Web scenario, large amounts of User-Generated Content (UGC) are
diffused through social media, often without any form of traditional trusted
intermediation. Therefore, the risk of running into misinformation is not
negligible. For this reason, assessing and mining the credibility of online
information constitutes a fundamental research issue today. Credibility, also
referred to as believability, is a quality perceived by individuals, who are
not always able to discern, with their own cognitive capacities, genuine
information from fake information. Hence, in recent years, several approaches
have been proposed to automatically assess credibility in social media. Many of
them are based on data-driven models, i.e., they employ machine learning
techniques to identify misinformation, but model-driven approaches have
recently been emerging as well, together with graph-based approaches focusing
on credibility propagation, and knowledge-based ones exploiting Semantic Web
technologies.
Three of the main contexts in which the assessment of information credibility
has been investigated concern: (i) the detection of opinion spam in review
sites, (ii) the detection of fake news in microblogging, and (iii) the
credibility assessment of online health-related information. In this article,
the main issues connected to the evaluation of information credibility in the
Social Web, which are shared by the above-mentioned contexts, are discussed. A
concise survey of the approaches and methodologies that have been proposed in
recent years to address these issues is also presented.
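The graph-based, credibility-propagation family of approaches mentioned in the abstract can be illustrated with a minimal sketch. Everything below (the bipartite user-content structure, the uniform 0.5 prior, and the 0.5 mixing factor) is invented for illustration; real systems operate on far richer graphs and learned weights:

```python
# Hypothetical sketch of graph-based credibility propagation: user
# credibility and content credibility reinforce each other over a
# bipartite user-content graph, starting from content-level priors.

def propagate_credibility(posts_by_user, prior, iterations=20):
    """posts_by_user: {user: [content_id, ...]};
    prior: {content_id: initial credibility in [0, 1]}.
    Returns (user_cred, content_cred) after fixed-point iteration."""
    content_cred = dict(prior)
    user_cred = {u: 0.5 for u in posts_by_user}
    for _ in range(iterations):
        # A user's credibility is the mean credibility of what they share.
        for u, posts in posts_by_user.items():
            user_cred[u] = sum(content_cred[c] for c in posts) / len(posts)
        # Content credibility blends its prior with its sharers' credibility.
        for c in prior:
            sharers = [u for u, posts in posts_by_user.items() if c in posts]
            if sharers:
                social = sum(user_cred[u] for u in sharers) / len(sharers)
                content_cred[c] = 0.5 * prior[c] + 0.5 * social
    return user_cred, content_cred
```

The mutual-reinforcement structure (credible users share credible content, and vice versa) is the common thread in propagation-based methods; concrete systems differ in how priors and mixing weights are chosen.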
Related papers
- A Survey on Automatic Credibility Assessment of Textual Credibility Signals in the Era of Large Language Models [6.538395325419292]
Credibility assessment is fundamentally based on aggregating credibility signals.
Credibility signals provide more granular, more easily explainable, and more widely usable information.
A growing body of research on automatic credibility assessment and detection of credibility signals can be characterized as highly fragmented and lacking mutual interconnections.
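As a toy illustration of what "aggregating credibility signals" can mean in practice, a score can be composed as a weighted average over per-text signals. The signal names and weights below are hypothetical, not taken from the surveyed paper:

```python
# Illustrative only: combining per-text credibility signals into a
# single score in [0, 1]. Signal names and weights are invented.

SIGNAL_WEIGHTS = {
    "has_citations": 0.4,    # references to verifiable sources
    "neutral_tone": 0.3,     # absence of loaded/emotional language
    "author_verified": 0.3,  # account-level trust indicator
}

def aggregate_credibility(signals):
    """signals: {signal_name: value in [0, 1]} -> score in [0, 1].
    Missing signals count as 0 (no evidence of credibility)."""
    total = sum(SIGNAL_WEIGHTS.values())
    weighted = sum(SIGNAL_WEIGHTS[k] * signals.get(k, 0.0)
                   for k in SIGNAL_WEIGHTS)
    return weighted / total
```

In real systems the signals are themselves detector outputs and the weights are typically learned rather than hand-set, but the aggregation step keeps this shape.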
arXiv Detail & Related papers (2024-10-28T17:51:08Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z)
- Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data [6.817247544942709]
We propose a credibility assessment solution for financial creators in social media that combines Natural Language Processing and Machine Learning.
The reputation of the contributors is assessed by automatically classifying their forecasts on asset values by type and verifying these predictions with actual market data.
The system provides natural language explanations of its decisions based on a model-agnostic analysis of relevant features.
arXiv Detail & Related papers (2024-06-17T08:08:03Z)
- ExFake: Towards an Explainable Fake News Detection Based on Content and Social Context Information [0.0]
ExFake is an explainable fake news detection system based on content and context-level information.
An Explainable AI (XAI) assistant is also adopted to help online social networks (OSN) users develop good reflexes when faced with doubted information.
arXiv Detail & Related papers (2023-11-16T15:57:58Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Leveraging Social Interactions to Detect Misinformation on Social Media [25.017602051478768]
We address the problem using a data set created during the COVID-19 pandemic.
It contains cascades of tweets discussing information weakly labeled as reliable or unreliable, based on a previous evaluation of the information source.
We additionally leverage network information. Following the homophily principle, we hypothesize that users who interact are generally interested in similar topics and spread similar kinds of news, which in turn are generally reliable or not.
arXiv Detail & Related papers (2023-04-06T10:30:04Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which might cause confusion and chaos unless it is detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- An Approach for Time-aware Domain-based Social Influence Prediction [4.753874889216745]
This paper presents an approach that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness.
The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques to predict highly trustworthy domain-based users.
arXiv Detail & Related papers (2020-01-19T10:39:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.