LongChecker: Improving scientific claim verification by modeling
full-abstract context
- URL: http://arxiv.org/abs/2112.01640v1
- Date: Thu, 2 Dec 2021 23:37:16 GMT
- Title: LongChecker: Improving scientific claim verification by modeling
full-abstract context
- Authors: David Wadden, Kyle Lo, Lucy Lu Wang, Arman Cohan, Iz Beltagy, Hannaneh
Hajishirzi
- Abstract summary: We introduce the LongChecker system for scientific claim verification.
Given a scientific claim and an evidence-containing research abstract, LongChecker predicts a veracity label and identifies supporting rationales.
By making labeling decisions based on all available context, LongChecker achieves better performance on cases requiring this type of understanding.
- Score: 38.73116177387815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the LongChecker system for scientific claim verification. Given
a scientific claim and an evidence-containing research abstract, LongChecker
predicts a veracity label and identifies supporting rationales in a multitask
fashion based on a shared encoding of the claim and abstract. We perform
experiments on the SciFact dataset, and find that LongChecker achieves
state-of-the-art performance. We conduct analysis to understand the source of
this improvement, and find that identifying the relationship between a claim
and a rationale reporting a scientific finding often requires understanding the
context in which the rationale appears. By making labeling decisions based on
all available context, LongChecker achieves better performance on cases
requiring this type of understanding. In addition, we show that LongChecker is
able to leverage weakly-supervised in-domain data to facilitate few-shot domain
adaptation for scientific claim verification.
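The abstract above describes a multitask design: a single long-context encoder over the claim and the full abstract, with one head predicting the veracity label and another marking rationale sentences. A minimal sketch of that setup is shown below, assuming a Longformer encoder from the transformers library; the model name, head shapes, and sentence-position bookkeeping are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a LongChecker-style multitask verifier: one shared long-context
# encoding of (claim, full abstract), two lightweight heads on top.
# Assumptions: Longformer base checkpoint, 3 veracity labels, and that the
# caller supplies the token position of each abstract sentence's marker token.
import torch
import torch.nn as nn
from transformers import LongformerModel

class MultitaskClaimVerifier(nn.Module):
    def __init__(self, encoder_name="allenai/longformer-base-4096", num_labels=3):
        super().__init__()
        self.encoder = LongformerModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.label_head = nn.Linear(hidden, num_labels)  # e.g. SUPPORTS / REFUTES / NOT ENOUGH INFO
        self.rationale_head = nn.Linear(hidden, 2)       # per-sentence: rationale or not

    def forward(self, input_ids, attention_mask, global_attention_mask,
                sentence_token_positions):
        # One pass over "claim </s> abstract", so the label decision can see
        # all available context rather than isolated sentences.
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           global_attention_mask=global_attention_mask)
        hidden_states = out.last_hidden_state                # (batch, seq_len, hidden)
        label_logits = self.label_head(hidden_states[:, 0])  # pooled <s> token
        # Gather one vector per abstract sentence (its marker-token position).
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
        sentence_states = hidden_states[batch_idx, sentence_token_positions]
        rationale_logits = self.rationale_head(sentence_states)
        return label_logits, rationale_logits
```

In training, the two heads share the encoder and are optimized jointly, which is what lets full-abstract context inform the label prediction.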
Related papers
- Robust Claim Verification Through Fact Detection [17.29665711917281]
Our novel approach, FactDetect, leverages Large Language Models (LLMs) to generate concise factual statements from evidence.
The generated facts are then combined with the claim and evidence.
Our method demonstrates competitive results, improving the supervised claim verification model by 15% in F1 score.
arXiv Detail & Related papers (2024-07-25T20:03:43Z)
- RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict [34.2739191920746]
High-quality evidence plays a vital role in enhancing fact-checking systems.
We propose a method based on a Large Language Model to automatically retrieve and summarize evidence from the Web.
We construct RU22Fact, a novel explainable fact-checking dataset of 16K samples on the 2022 Russia-Ukraine conflict.
arXiv Detail & Related papers (2024-03-25T11:56:29Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead to retrieve evidence and verify claims.
The proposed system achieves significant improvements over the best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- SciFact-Open: Towards open-domain scientific claim verification [61.288725621156864]
We present SciFact-Open, a new test collection designed to evaluate the performance of scientific claim verification systems.
We collect evidence for scientific claims by pooling and annotating the top predictions of four state-of-the-art scientific claim verification models.
We find that systems developed on smaller corpora struggle to generalize to SciFact-Open, exhibiting performance drops of at least 15 F1 points.
arXiv Detail & Related papers (2022-10-25T05:45:00Z)
- Generating Scientific Claims for Zero-Shot Scientific Fact Checking [54.62086027306609]
Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data.
We propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences.
We also demonstrate its usefulness in zero-shot fact checking for biomedical claims.
arXiv Detail & Related papers (2022-03-24T11:29:20Z)
- RerrFact: Reduced Evidence Retrieval Representations for Scientific Claim Verification [4.052777228128475]
We propose a modular approach that sequentially carries out binary classification for every prediction subtask.
We carry out two-step stance predictions that first differentiate non-relevant rationales and then identify supporting or refuting rationales for a given claim.
Experimentally, our system RerrFact, with no fine-tuning, a simple design, and a fraction of the model parameters, fares competitively on the leaderboard.
arXiv Detail & Related papers (2022-02-05T21:52:45Z)
- Graph-based Retrieval for Claim Verification over Cross-Document Evidence [0.6853165736531939]
We conjecture that a graph-based approach can be beneficial for identifying fragmented evidence.
We tested this hypothesis by building, over the whole corpus, a large graph that interconnects text portions through the entities they mention (a minimal sketch of this idea appears after this list).
Our experiments show that leveraging the graph structure helps identify a reasonably small set of passages related to a claim.
arXiv Detail & Related papers (2021-09-13T14:54:26Z)
- Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification [18.330265729989843]
We propose an approach, named ARSJoint, that jointly learns the modules for the three tasks within a machine reading comprehension framework.
The experimental results on the benchmark dataset SciFact show that our approach outperforms the existing works.
arXiv Detail & Related papers (2021-09-13T10:07:26Z)
- Fact or Fiction: Verifying Scientific Claims [53.29101835904273]
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim.
We construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales.
We show that our system is able to verify claims related to COVID-19 by identifying evidence from the CORD-19 corpus.
arXiv Detail & Related papers (2020-04-30T17:22:57Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for the verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
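As noted in the Graph-based Retrieval entry above, a minimal sketch of the shared-entity graph idea is given here. It assumes passages have already been mapped to the entities they mention (for example with an off-the-shelf entity linker); the function names and input format are illustrative assumptions, not the paper's code.

```python
# Sketch: connect text passages that mention the same entity, then expand an
# initial retrieval set along those edges to pick up fragmented evidence.
from collections import defaultdict
import networkx as nx

def build_entity_graph(passage_entities):
    """passage_entities: dict mapping passage_id -> set of mentioned entity strings."""
    graph = nx.Graph()
    graph.add_nodes_from(passage_entities)
    by_entity = defaultdict(list)
    for pid, entities in passage_entities.items():
        for ent in entities:
            by_entity[ent].append(pid)
    # Link every pair of passages that share an entity mention.
    for ent, pids in by_entity.items():
        for i in range(len(pids)):
            for j in range(i + 1, len(pids)):
                graph.add_edge(pids[i], pids[j], entity=ent)
    return graph

def expand_candidates(graph, seed_passage_ids, hops=1):
    """Grow a seed retrieval set by following shared-entity edges for a few hops."""
    selected = set(seed_passage_ids)
    for _ in range(hops):
        selected |= {nbr for pid in selected for nbr in graph.neighbors(pid)}
    return selected
```

Keeping the number of hops small keeps the candidate set compact, in line with the entry's observation that a reasonably small portion of passages suffices.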