The State of Human-centered NLP Technology for Fact-checking
- URL: http://arxiv.org/abs/2301.03056v1
- Date: Sun, 8 Jan 2023 15:13:13 GMT
- Title: The State of Human-centered NLP Technology for Fact-checking
- Authors: Anubrata Das, Houjiang Liu, Venelin Kovatchev, Matthew Lease
- Abstract summary: Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, and disrupting democratic elections and financial markets.
A growing body of Natural Language Processing (NLP) technologies has been proposed for more scalable fact-checking.
Despite tremendous growth in such research, practical adoption of NLP technologies for fact-checking remains in its infancy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Misinformation threatens modern society by promoting distrust in science,
changing narratives in public health, heightening social polarization, and
disrupting democratic elections and financial markets, among a myriad of other
societal harms. To address this, a growing cadre of professional fact-checkers
and journalists provide high-quality investigations into purported facts.
However, these largely manual efforts have struggled to match the enormous
scale of the problem. In response, a growing body of Natural Language
Processing (NLP) technologies has been proposed for more scalable
fact-checking. Despite tremendous growth in such research, practical adoption
of NLP technologies for fact-checking remains in its infancy.
In this work, we review the capabilities and limitations of the current NLP
technologies for fact-checking. Our particular focus is to further chart the
design space for how these technologies can be harnessed and refined in order
to better meet the needs of human fact-checkers. To do so, we review key
aspects of NLP-based fact-checking: task formulation, dataset construction,
modeling, and human-centered strategies, such as explainable models and
human-in-the-loop approaches. Next, we review the efficacy of applying
NLP-based fact-checking tools to assist human fact-checkers. We recommend that
future research include collaboration with fact-checker stakeholders early on
in NLP research, as well as incorporation of human-centered design practices in
model development, in order to further guide technology development for human
use and practical adoption. Finally, we advocate for more research on benchmark
development supporting extrinsic evaluation of human-centered fact-checking
technologies.
Related papers
- The Impact and Opportunities of Generative AI in Fact-Checking
Generative AI appears poised to transform white collar professions, with more than 90% of Fortune 500 companies using OpenAI's flagship GPT models.
But how will such technologies impact organizations whose job is to verify and report factual information?
We conducted 30 interviews with N=38 participants working at 29 fact-checking organizations across six continents.
arXiv Detail & Related papers (2024-05-24T23:58:01Z)
- What Can Natural Language Processing Do for Peer Review?
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
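Integrated Gradients, mentioned above, attributes a model's prediction to its input features by averaging gradients along a straight-line path from a baseline to the input. The following is a minimal sketch of the idea using a toy logistic scorer in place of a real NLP model; the function, weights, and inputs here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=50):
    # Integrated Gradients: (x - baseline) times the average gradient of f
    # along the straight-line path from baseline to x (midpoint quadrature).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy logistic scorer standing in for a trained classifier over 3 features.
w = np.array([1.5, -2.0, 0.5])
f = lambda x: 1.0 / (1.0 + np.exp(-x @ w))        # model output in (0, 1)
grad_f = lambda x: f(x) * (1.0 - f(x)) * w        # analytic gradient

x = np.array([1.0, 0.5, 2.0])                     # input to explain
baseline = np.zeros(3)                            # all-zero reference input
attr = integrated_gradients(f, grad_f, x, baseline)

# Completeness axiom: per-feature attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The completeness property checked at the end is what makes such attributions auditable: the explanation accounts exactly for the change in model output relative to the baseline.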
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
We investigate a co-design method, Matchmaking for AI, to enable fact-checkers, designers, and NLP researchers to collaboratively identify what fact-checker needs should be addressed by technology.
Co-design sessions we conducted with 22 professional fact-checkers yielded a set of 11 design ideas that offer a "north star"
Our work provides new insights into both human-centered fact-checking research and practice and AI co-design research.
arXiv Detail & Related papers (2023-08-14T15:31:32Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because the large, high-quality datasets needed for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Scientific Fact-Checking: A Survey of Resources and Approaches
Scientific fact-checking is the variation of the fact-checking task concerned with verifying claims rooted in scientific knowledge.
This task has received significant attention due to the growing importance of scientific and health discussions on online platforms.
We present a comprehensive survey of existing research in this emerging field and its related tasks.
arXiv Detail & Related papers (2023-05-26T12:12:15Z)
- Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems
Despite existing research, there is still a gap between fact-checking practitioners' needs and current AI research.
In this study, we conducted semi-structured in-depth interviews with Central European fact-checkers.
Their information behavior and requirements for desired supporting tools were analyzed using iterative bottom-up content analysis.
arXiv Detail & Related papers (2022-11-22T10:18:09Z)
- Systematic Inequalities in Language Technology Performance across the World's Languages
We introduce a framework for estimating the global utility of language technologies.
Our analyses cover the field at large, as well as more in-depth studies of both user-facing technologies and more linguistic NLP tasks.
arXiv Detail & Related papers (2021-10-13T14:03:07Z)
- Ensuring the Inclusive Use of Natural Language Processing in the Global Response to COVID-19
We discuss ways in which current and future NLP approaches can be made more inclusive by covering low-resource languages.
We suggest several future directions for researchers interested in maximizing the positive societal impacts of NLP.
arXiv Detail & Related papers (2021-08-11T12:54:26Z)
- How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact
We propose a framework to evaluate NLP tasks' direct and indirect real-world impact.
We adopt the methodology of global priorities research to identify priority causes for NLP research.
Finally, we use our theoretical framework to provide some practical guidelines for future NLP research for social good.
arXiv Detail & Related papers (2021-06-04T09:17:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.