The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation
Objectively?
- URL: http://arxiv.org/abs/2008.05701v1
- Date: Thu, 13 Aug 2020 05:53:24 GMT
- Title: The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation
Objectively?
- Authors: Kevin Roitero, Michael Soprano, Beatrice Portelli, Damiano Spina,
Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro and Gianluca Demartini
- Abstract summary: We study whether crowdsourcing is an effective and reliable method to assess statements' truthfulness during a pandemic.
We specifically target statements related to the COVID-19 health emergency, which is still ongoing at the time of the study.
In our experiment, crowd workers are asked to assess the truthfulness of statements, as well as to provide evidence for their assessments as a URL and a text justification.
- Score: 17.288917654501265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation is an ever-increasing problem that is difficult for
the research community to solve and has a negative impact on society at large.
Very recently, the problem has been addressed with a crowdsourcing-based
approach to scale up labeling efforts: to assess the truthfulness of a
statement, instead of relying on a few experts, a crowd of (non-expert) judges
is employed. We follow the same approach to study whether crowdsourcing is an
effective and reliable method to assess statements' truthfulness during a
pandemic. We specifically target statements related to the COVID-19 health
emergency, which is still ongoing at the time of the study and has arguably
caused an increase in the amount of misinformation spreading online (a
phenomenon for which the term "infodemic" has been used). By doing so, we are
able to address (mis)information that is both related to a sensitive and
personal issue like health and very recent relative to when the judgment is
made: two issues that have not been analyzed in related work. In our
experiment, crowd workers are asked to assess the truthfulness of statements
and to provide evidence for their assessments as a URL and a text
justification. Besides showing that the crowd is able to accurately judge the
truthfulness of the statements, we also report results on many different
aspects, including agreement among workers and the effects of different
aggregation functions, scale transformations, and workers' background and bias
(an illustrative sketch of such an aggregation step follows this abstract). We
also analyze workers' behavior in terms of queries submitted, URLs found and
selected, text justifications, and other behavioral data such as clicks and
mouse actions collected by means of an ad hoc logger.
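A minimal, hypothetical sketch of the aggregation and scale-transformation step mentioned in the abstract: the worker scores, the 0-5 truthfulness scale, the thresholds, and the helper names below are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Illustrative sketch only: the paper compares aggregation functions and scale
# transformations over crowd truthfulness judgments; the exact scales, thresholds,
# and data layout used here are assumptions made for this example.
from statistics import mean, median

# Hypothetical judgments: each statement receives several worker scores on an
# assumed 0-5 scale (0 = definitely false ... 5 = definitely true).
judgments = {
    "statement_1": [4, 5, 3, 4, 5],
    "statement_2": [0, 1, 1, 2, 0],
}

def aggregate(scores, fn=mean):
    """Collapse the individual worker scores for one statement into a single value."""
    return fn(scores)

def to_three_level(score, low=1.5, high=3.5):
    """Example scale transformation: map an aggregated 0-5 score onto
    {false, in-between, true} using assumed thresholds."""
    if score <= low:
        return "false"
    if score >= high:
        return "true"
    return "in-between"

for statement, scores in judgments.items():
    agg_mean = aggregate(scores, mean)
    agg_median = aggregate(scores, median)
    print(statement, round(agg_mean, 2), agg_median, to_three_level(agg_mean))
```

Swapping `fn` between mean and median mirrors the kind of aggregation-function comparison the abstract refers to, and `to_three_level` stands in for transforming judgments onto a coarser scale.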
Related papers
- Correcting misinformation on social media with a large language model [14.69780455372507]
Real-world misinformation, which is often multimodal, can mislead through diverse tactics such as conflating correlation with causation.
Such misinformation is severely understudied, challenging to address, and harms various social domains, particularly on social media.
We propose MUSE, an LLM augmented with access to and credibility evaluation of up-to-date information.
arXiv Detail & Related papers (2024-03-17T10:59:09Z)
- Mitigating Biases in Collective Decision-Making: Enhancing Performance in the Face of Fake News [4.413331329339185]
We study the influence these biases can have in the pervasive problem of fake news by evaluating human participants' capacity to identify false headlines.
By focusing on headlines involving sensitive characteristics, we gather a comprehensive dataset to explore how human responses are shaped by their biases.
We show that demographic factors, headline categories, and the manner in which information is presented significantly influence errors in human judgment.
arXiv Detail & Related papers (2024-03-11T12:08:08Z)
- Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Resolving the Human Subjects Status of Machine Learning's Crowdworkers [29.008050084395958]
We investigate the appropriate designation of ML crowdsourcing studies.
We highlight two challenges posed by ML: the same set of workers can serve multiple roles and provide many sorts of information.
Our analysis exposes a potential loophole in the Common Rule, where researchers can elude research ethics oversight by splitting data collection and analysis into distinct studies.
arXiv Detail & Related papers (2022-06-08T17:55:01Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Drink Bleach or Do What Now? Covid-HeRA: A Study of Risk-Informed Health Decision Making in the Presence of COVID-19 Misinformation [23.449057978351945]
We frame health misinformation as a risk assessment task.
We study the severity of each misinformation story and how readers perceive this severity.
We evaluate several traditional and state-of-the-art models and show there is a significant gap in performance.
arXiv Detail & Related papers (2020-10-17T08:34:57Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
- Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out the inability to infer behavioral conclusions from probing results.
We offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
arXiv Detail & Related papers (2020-06-01T15:00:11Z)