Automatic Claim Review for Climate Science via Explanation Generation
- URL: http://arxiv.org/abs/2107.14740v1
- Date: Fri, 30 Jul 2021 16:37:45 GMT
- Title: Automatic Claim Review for Climate Science via Explanation Generation
- Authors: Shraey Bhatia, Jey Han Lau, Timothy Baldwin
- Abstract summary: Scientists and experts have been addressing such claims by providing manually written feedback.
We deploy the Fusion-in-Decoder approach used in open-domain question answering, augmented with supporting passages retrieved from an external knowledge source.
We experiment with different knowledge sources, retrievers, and retrieval depths, and demonstrate that even a small number of high-quality, manually written explanations can help us generate good explanations.
- Score: 33.44370581827454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is consensus in the scientific community about human-induced climate
change. Despite this, the web is awash with claims promoting climate change
scepticism, driving the need to fact-check them while also providing an
explanation and justification for the fact check. Scientists and experts have
been trying to address this by providing manually written feedback for these
claims. In this paper, we aim to aid them by automating the generation of an
explanation for the predicted veracity label of a claim, deploying the
Fusion-in-Decoder approach used in open-domain question answering, augmented
with supporting passages retrieved from an external knowledge source. We
experiment with different knowledge sources, retrievers, and retrieval depths,
and demonstrate that even a small number of high-quality, manually written
explanations can help us generate good explanations.
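As a rough illustration of the approach described above, the sketch below retrieves supporting passages for a claim, encodes each (claim, passage) pair independently, and fuses the encoder outputs so the decoder attends over all passages while generating an explanation. This is not the authors' released code; the BM25 retriever, the t5-small checkpoint, and the toy knowledge source are illustrative assumptions standing in for the paper's knowledge sources and retrievers.

```python
# Minimal Fusion-in-Decoder-style sketch (assumed components, not the paper's code).
import torch
from rank_bm25 import BM25Okapi
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

# Toy stand-in for an external knowledge source of climate-science passages.
passages = [
    "Attribution studies find that most warming since 1950 is human induced.",
    "Atmospheric CO2 has risen from about 280 ppm to over 410 ppm since 1850.",
    "Solar irradiance has been roughly flat or declining over recent decades.",
]
claim = "Recent warming is driven by solar cycles, not human emissions."

# 1) Retrieval: score passages against the claim and keep the top-k
#    (the paper varies the knowledge source, retriever, and retrieval depth).
bm25 = BM25Okapi([p.lower().split() for p in passages])
retrieved = bm25.get_top_n(claim.lower().split(), passages, n=2)

# 2) Fusion-in-Decoder: each (claim, passage) pair is encoded separately;
#    the per-passage encoder states are concatenated so the decoder can
#    attend over all of them while generating the explanation.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = [f"claim: {claim} passage: {p}" for p in retrieved]
enc = tokenizer(inputs, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    encoder_out = model.encoder(
        input_ids=enc.input_ids, attention_mask=enc.attention_mask
    )
    # Fuse by flattening the k passage encodings into one long sequence.
    fused = encoder_out.last_hidden_state.reshape(1, -1, model.config.d_model)
    fused_mask = enc.attention_mask.reshape(1, -1)

    output_ids = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=fused),
        attention_mask=fused_mask,
        max_length=64,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the paper's setting, the generator is additionally fine-tuned on manually written explanations, which is where even a small amount of expert-curated data helps.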
Related papers
- Evaluating Transparency of Machine Generated Fact Checking Explanations [48.776087871960584]
We investigate the impact of human-curated vs. machine-selected evidence for explanation generation using large language models.
Surprisingly, we found that large language models generate similar or higher quality explanations using machine-selected evidence.
arXiv Detail & Related papers (2024-06-18T14:13:13Z) - Detecting Fallacies in Climate Misinformation: A Technocognitive Approach to Identifying Misleading Argumentation [0.6496783221842394]
We develop a dataset mapping different types of climate misinformation to reasoning fallacies.
This dataset is used to train a model to detect fallacies in climate misinformation.
arXiv Detail & Related papers (2024-05-14T01:01:44Z) - Automated Fact-Checking of Climate Change Claims with Large Language Models [3.1080484250243425]
This paper presents Climinator, a novel AI-based tool designed to automate the fact-checking of climate change claims.
Climinator employs an innovative Mediator-Advocate framework to synthesize varying scientific perspectives.
Our model demonstrates remarkable accuracy when testing claims collected from Climate Feedback and Skeptical Science.
arXiv Detail & Related papers (2024-01-23T08:49:23Z) - QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z) - Ask to Know More: Generating Counterfactual Explanations for Fake Claims [11.135087647482145]
We propose elucidating fact checking predictions using counterfactual explanations to help people understand why a piece of news was identified as fake.
In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately.
Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T04:42:00Z) - Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z) - SciClops: Detecting and Contextualizing Scientific Claims for Assisting Manual Fact-Checking [7.507186058512835]
This paper describes SciClops, a method to help combat online scientific misinformation.
SciClops involves three main steps to process scientific claims found in online news articles and social media postings.
It effectively assists non-expert fact-checkers in the verification of complex scientific claims, outperforming commercial fact-checking systems.
arXiv Detail & Related papers (2021-10-25T16:35:58Z) - Explainable Fact-checking through Question Answering [17.1138216746642]
We propose generating questions and answers from claims and answering the same questions from evidence.
We also propose an answer comparison model with an attention mechanism attached to each question.
Experimental results show that the proposed model can achieve state-of-the-art performance while providing reasonable explainable capabilities.
arXiv Detail & Related papers (2021-10-11T15:55:11Z) - FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z) - Fact or Fiction: Verifying Scientific Claims [53.29101835904273]
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim.
We construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales.
We show that our system is able to verify claims related to COVID-19 by identifying evidence from the CORD-19 corpus.
arXiv Detail & Related papers (2020-04-30T17:22:57Z)