SciClops: Detecting and Contextualizing Scientific Claims for Assisting
Manual Fact-Checking
- URL: http://arxiv.org/abs/2110.13090v1
- Date: Mon, 25 Oct 2021 16:35:58 GMT
- Title: SciClops: Detecting and Contextualizing Scientific Claims for Assisting
Manual Fact-Checking
- Authors: Panayiotis Smeros, Carlos Castillo, Karl Aberer
- Abstract summary: This paper describes SciClops, a method to help combat online scientific misinformation.
SciClops involves three main steps to process scientific claims found in online news articles and social media postings.
It effectively assists non-expert fact-checkers in the verification of complex scientific claims, outperforming commercial fact-checking systems.
- Score: 7.507186058512835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes SciClops, a method to help combat online scientific
misinformation. Although automated fact-checking methods have gained
significant attention recently, they require pre-existing ground-truth
evidence, which, in the scientific context, is sparse and scattered across a
constantly-evolving scientific literature. Existing methods do not exploit this
literature, which can effectively contextualize and combat science-related
fallacies. Furthermore, these methods rarely require human intervention, which
is essential for the convoluted and critical domain of scientific
misinformation. SciClops involves three main steps to process scientific claims
found in online news articles and social media postings: extraction,
clustering, and contextualization. First, the extraction of scientific claims
takes place using a domain-specific, fine-tuned transformer model. Second,
similar claims extracted from heterogeneous sources are clustered together with
related scientific literature using a method that exploits their content and
the connections among them. Third, check-worthy claims, broadcasted by popular
yet unreliable sources, are highlighted together with an enhanced fact-checking
context that includes related verified claims, news articles, and scientific
papers. Extensive experiments show that SciClops handles these three steps
effectively and assists non-expert fact-checkers in the
verification of complex scientific claims, outperforming commercial
fact-checking systems.
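To make the three-step pipeline concrete, below is a minimal, hypothetical sketch in Python. It is not the authors' implementation: a cue-word filter stands in for the fine-tuned transformer claim detector, TF-IDF vectors clustered with k-means stand in for the joint content-and-link clustering of claims and literature, and a simple popularity times (1 - reliability) heuristic stands in for the check-worthiness ranking. All function names, scores, and toy data are illustrative assumptions.

```python
# Minimal sketch of a SciClops-style pipeline (extraction -> clustering ->
# contextualization). Illustrative only; the heuristics below are NOT the
# paper's models.
from dataclasses import dataclass
from typing import List

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


@dataclass
class Claim:
    text: str
    source: str                # e.g., a news outlet or social-media account
    source_popularity: float   # assumed audience-reach estimate in [0, 1]
    source_reliability: float  # assumed reliability estimate in [0, 1]


def extract_claims(sentences: List[str], source: str,
                   popularity: float, reliability: float) -> List[Claim]:
    """Stand-in for the fine-tuned transformer claim detector: keep
    sentences that mention a study or finding (crude cue-word proxy)."""
    cues = ("study", "research", "scientists", "causes", "linked to")
    return [Claim(s, source, popularity, reliability)
            for s in sentences if any(c in s.lower() for c in cues)]


def cluster_with_literature(claims: List[Claim], papers: List[str], k: int = 2):
    """Content-only stand-in for the joint clustering of claims and papers
    (the paper also exploits the links among them, omitted here)."""
    texts = [c.text for c in claims] + papers
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
    return labels[:len(claims)], labels[len(claims):]


def check_worthiness(claim: Claim) -> float:
    """Rank claims spread by popular yet unreliable sources highest."""
    return claim.source_popularity * (1.0 - claim.source_reliability)


if __name__ == "__main__":
    sentences = [
        "A new study claims that coffee cures cancer.",
        "The weather in Geneva was sunny yesterday.",
        "Scientists say vitamin D is linked to lower flu risk.",
    ]
    papers = [  # toy stand-ins for abstracts of related scientific papers
        "We analyse the association between coffee consumption and cancer incidence.",
        "A randomised trial of vitamin D supplementation and influenza outcomes.",
    ]
    claims = extract_claims(sentences, "example-blog.com",
                            popularity=0.9, reliability=0.2)
    claim_labels, paper_labels = cluster_with_literature(claims, papers)
    # Contextualization: show each claim with the literature in its cluster,
    # most check-worthy first.
    for claim, label in sorted(zip(claims, claim_labels),
                               key=lambda pair: -check_worthiness(pair[0])):
        related = [p for p, pl in zip(papers, paper_labels) if pl == label]
        print(f"[check-worthiness={check_worthiness(claim):.2f}] {claim.text}")
        print("  related literature:", related)
```

The sketch only mirrors the data flow of extraction, clustering, and contextualization; the actual system uses a domain-specific transformer for extraction and also exploits the link structure between claims, news articles, and scientific papers when clustering.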
Related papers
- Can Large Language Models Detect Misinformation in Scientific News Reporting? [1.0344642971058586]
This paper investigates whether it is possible to use large language models (LLMs) to detect misinformation in scientific reporting.
We first present a new labeled dataset SciNews, containing 2.4k scientific news stories drawn from trusted and untrustworthy sources.
We identify dimensions of scientific validity in science news articles and explore how this can be integrated into the automated detection of scientific misinformation.
arXiv Detail & Related papers (2024-02-22T04:07:00Z)
- Understanding Fine-grained Distortions in Reports of Scientific Findings [46.96512578511154]
Distorted science communication harms individuals and society as it can lead to unhealthy behavior change and decrease trust in scientific institutions.
Given the rapidly increasing volume of science communication in recent years, a fine-grained understanding of how findings from scientific publications are reported to the general public is crucial.
arXiv Detail & Related papers (2024-02-19T19:00:01Z)
- Automated Fact-Checking of Climate Change Claims with Large Language Models [3.1080484250243425]
This paper presents Climinator, a novel AI-based tool designed to automate the fact-checking of climate change claims.
Climinator employs an innovative Mediator-Advocate framework to synthesize varying scientific perspectives.
Our model demonstrates remarkable accuracy when testing claims collected from Climate Feedback and Skeptical Science.
arXiv Detail & Related papers (2024-01-23T08:49:23Z)
- SciFact-Open: Towards open-domain scientific claim verification [61.288725621156864]
We present SciFact-Open, a new test collection designed to evaluate the performance of scientific claim verification systems.
We collect evidence for scientific claims by pooling and annotating the top predictions of four state-of-the-art scientific claim verification models.
We find that systems developed on smaller corpora struggle to generalize to SciFact-Open, exhibiting performance drops of at least 15 F1.
arXiv Detail & Related papers (2022-10-25T05:45:00Z)
- Modeling Information Change in Science Communication with Semantically Matched Paraphrases [50.67030449927206]
SPICED is the first paraphrase dataset of scientific findings annotated for degree of information change.
SPICED contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers.
Models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims.
arXiv Detail & Related papers (2022-10-24T07:44:38Z)
- Generating Scientific Claims for Zero-Shot Scientific Fact Checking [54.62086027306609]
Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data.
We propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences.
We also demonstrate its usefulness in zero-shot fact checking for biomedical claims.
arXiv Detail & Related papers (2022-03-24T11:29:20Z)
- RerrFact: Reduced Evidence Retrieval Representations for Scientific Claim Verification [4.052777228128475]
We propose a modular approach that sequentially carries out binary classification for every prediction subtask.
We carry out two-step stance predictions that first differentiate non-relevant rationales and then identify supporting or refuting rationales for a given claim.
Experimentally, our system RerrFact, with no fine-tuning, a simple design, and a fraction of the model parameters, fares competitively on the leaderboard.
arXiv Detail & Related papers (2022-02-05T21:52:45Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- Fact or Fiction: Verifying Scientific Claims [53.29101835904273]
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim.
We construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales.
We show that our system is able to verify claims related to COVID-19 by identifying evidence from the CORD-19 corpus.
arXiv Detail & Related papers (2020-04-30T17:22:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.