Scientific Fact-Checking: A Survey of Resources and Approaches
- URL: http://arxiv.org/abs/2305.16859v1
- Date: Fri, 26 May 2023 12:12:15 GMT
- Title: Scientific Fact-Checking: A Survey of Resources and Approaches
- Authors: Juraj Vladika, Florian Matthes
- Abstract summary: Scientific fact-checking is the variation of the task concerned with verifying claims rooted in scientific knowledge.
This task has received significant attention due to the growing importance of scientific and health discussions on online platforms.
We present a comprehensive survey of existing research in this emerging field and its related tasks.
- Score: 0.799536002595393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of fact-checking deals with assessing the veracity of factual claims
based on credible evidence and background knowledge. In particular, scientific
fact-checking is the variation of the task concerned with verifying claims
rooted in scientific knowledge. This task has received significant attention
due to the growing importance of scientific and health discussions on online
platforms. Automated scientific fact-checking methods based on NLP can help
combat the spread of misinformation, assist researchers in knowledge discovery,
and help individuals understand new scientific breakthroughs. In this paper, we
present a comprehensive survey of existing research in this emerging field and
its related tasks. We provide a task description, discuss the construction
process of existing datasets, and analyze proposed models and approaches. Based
on our findings, we identify intriguing challenges and outline potential future
directions to advance the field.
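As a minimal, hedged sketch of the pipeline the abstract describes (retrieve credible evidence for a claim, then judge its veracity), the toy Python example below uses a lexical-overlap retriever and a simple negation heuristic in place of trained models; the corpus, function names, and heuristics are illustrative assumptions, not methods from the surveyed work.

```python
# Minimal sketch of an automated fact-checking pipeline: (1) retrieve
# candidate evidence for a claim, (2) predict a verdict from that evidence.
# All names and heuristics here are illustrative assumptions; real systems
# use trained retrievers and NLI-style verdict classifiers.

from collections import Counter
from typing import List, Tuple

def tokenize(text: str) -> List[str]:
    return [t.lower().strip(".,;:!?") for t in text.split()]

def retrieve_evidence(claim: str, corpus: List[str], k: int = 3) -> List[Tuple[float, str]]:
    """Rank corpus sentences by token overlap with the claim (toy retriever)."""
    claim_tokens = Counter(tokenize(claim))
    scored = []
    for sentence in corpus:
        overlap = sum((claim_tokens & Counter(tokenize(sentence))).values())
        scored.append((overlap / max(len(claim_tokens), 1), sentence))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)[:k]

def predict_verdict(evidence: List[Tuple[float, str]]) -> str:
    """Crude stand-in for a trained entailment model: lexical overlap plus
    a negation cue decides SUPPORTED / REFUTED / NOT ENOUGH INFO."""
    if not evidence or evidence[0][0] == 0.0:
        return "NOT ENOUGH INFO"
    top_sentence = " " + evidence[0][1].lower() + " "
    if any(cue in top_sentence for cue in (" not ", " no ", " does not ")):
        return "REFUTED"
    return "SUPPORTED"

if __name__ == "__main__":
    corpus = [
        "Vitamin C supplementation does not prevent the common cold in the general population.",
        "Regular physical activity is associated with lower cardiovascular risk.",
    ]
    claim = "Vitamin C prevents the common cold."
    evidence = retrieve_evidence(claim, corpus)
    print(predict_verdict(evidence), "|", evidence[0][1])
```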
Related papers
- Machine Understanding of Scientific Language [3.094414120832024]
This thesis is concerned with the cultivation of datasets, methods, and tools for machine understanding of scientific language. I present several contributions in three areas of natural language processing and machine learning: automatic fact checking, learning with limited data, and scientific text processing.
arXiv Detail & Related papers (2025-06-30T15:55:10Z) - ScienceMeter: Tracking Scientific Knowledge Updates in Language Models [79.33626657942169]
Large Language Models (LLMs) are increasingly used to support scientific research, but their knowledge of scientific advancements can quickly become outdated. We introduce ScienceMeter, a new framework for evaluating scientific knowledge update methods over scientific knowledge spanning the past, present, and future.
arXiv Detail & Related papers (2025-05-30T07:28:20Z) - Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation [58.064940977804596]
A plethora of new AI models and tools has been proposed, promising to empower researchers and academics worldwide to conduct their research more effectively and efficiently.
Ethical concerns regarding shortcomings of these tools and potential for misuse take a particularly prominent place in our discussion.
arXiv Detail & Related papers (2025-02-07T18:26:45Z) - Health Misinformation in Social Networks: A Survey of IT Approaches [2.1440886607229563]
This survey aims to provide a systematic review of related research.
We first present manual and automatic approaches for fact-checking.
We then explore fake news detection methods, using content, propagation features, or source features, as well as mitigation approaches for countering the spread of misinformation.
arXiv Detail & Related papers (2024-10-24T12:00:51Z) - A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery [68.48094108571432]
Large language models (LLMs) have revolutionized the way text and other modalities of data are handled.
We aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs.
arXiv Detail & Related papers (2024-06-16T08:03:24Z) - A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
arXiv Detail & Related papers (2023-05-22T11:08:00Z) - How Data Scientists Review the Scholarly Literature [4.406926847270567]
We examine the literature review practices of data scientists.
Data science represents a field seeing an exponential rise in papers.
No prior work has examined the specific practices and challenges faced by these scientists.
arXiv Detail & Related papers (2023-01-10T03:53:05Z) - The State of Human-centered NLP Technology for Fact-checking [7.866556977836075]
Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, and disrupting democratic elections and financial markets.
A growing body of Natural Language Processing (NLP) technologies has been proposed for more scalable fact-checking.
Despite tremendous growth in such research, practical adoption of NLP technologies for fact-checking still remains in its infancy today.
arXiv Detail & Related papers (2023-01-08T15:13:13Z) - Modeling Information Change in Science Communication with Semantically Matched Paraphrases [50.67030449927206]
SPICED is the first paraphrase dataset of scientific findings annotated for degree of information change.
SPICED contains 6,000 scientific finding pairs extracted from news stories, social media discussions, and full texts of original papers.
Models trained on SPICED improve downstream performance on evidence retrieval for fact checking of real-world scientific claims.
arXiv Detail & Related papers (2022-10-24T07:44:38Z) - KnowledgeShovel: An AI-in-the-Loop Document Annotation System for Scientific Knowledge Base Construction [46.56643271476249]
KnowledgeShovel is an AI-in-the-Loop document annotation system for researchers to construct scientific knowledge bases.
The design of KnowledgeShovel introduces a multi-step, multi-modal AI collaboration pipeline to improve data accuracy while reducing the human burden.
A follow-up user evaluation with 7 geoscience researchers shows that KnowledgeShovel can enable efficient construction of scientific knowledge bases with satisfactory accuracy.
arXiv Detail & Related papers (2022-10-06T11:38:18Z) - Research on Domain Information Mining and Theme Evolution of Scientific Papers [5.747583451398117]
Cross-disciplinary research has gradually become an emerging frontier direction.
How to effectively use the huge number of scientific papers to help researchers has become a challenge.
arXiv Detail & Related papers (2022-04-18T14:36:17Z) - Generating Scientific Claims for Zero-Shot Scientific Fact Checking [54.62086027306609]
Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data.
We propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences.
We also demonstrate its usefulness in zero-shot fact checking for biomedical claims; a toy illustration of claim splitting appears after this list.
arXiv Detail & Related papers (2022-03-24T11:29:20Z) - CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
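As referenced in the claim-generation entry above, the toy sketch below illustrates what "atomic and verifiable claims from scientific sentences" can mean in practice; the conjunction-splitting rule and the example sentence are illustrative assumptions, not the cited paper's learned generation approach.

```python
# Toy illustration of turning a scientific finding into atomic claim
# candidates by splitting on conjunctions and semicolons. This is a naive
# assumption for intuition only; the cited work generates claims with
# trained models and handles atomicity and verifiability far more carefully.

import re
from typing import List

def atomic_claim_candidates(sentence: str) -> List[str]:
    """Split on ';', 'and', and 'but' to produce shorter candidate claims."""
    parts = re.split(r";|\band\b|\bbut\b", sentence)
    # Keep only fragments long enough to plausibly state a fact. Note that a
    # fragment may still need its subject restored, which is one reason why
    # claim generation is a non-trivial task.
    return [p.strip().rstrip(".") + "." for p in parts if len(p.split()) > 3]

if __name__ == "__main__":
    finding = ("Vitamin D supplementation reduced fracture risk in older adults "
               "and improved self-reported mobility in two trials.")
    for claim in atomic_claim_candidates(finding):
        print(claim)
```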
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.