Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
- URL: http://arxiv.org/abs/2308.07213v3
- Date: Mon, 14 Oct 2024 18:04:55 GMT
- Title: Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
- Authors: Houjiang Liu, Anubrata Das, Alexander Boltz, Didi Zhou, Daisy Pinaroc, Matthew Lease, Min Kyung Lee
- Abstract summary: We investigate a co-design method, Matchmaking for AI, to enable fact-checkers, designers, and NLP researchers to collaboratively identify what fact-checker needs should be addressed by technology.
Co-design sessions we conducted with 22 professional fact-checkers yielded a set of 11 design ideas that offer a "north star" for integrating fact-checker criteria into novel NLP design concepts.
Our work provides new insights into both human-centered fact-checking research and practice and AI co-design research.
- Score: 46.40919004160953
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While many Natural Language Processing (NLP) techniques have been proposed for fact-checking, both academic research and fact-checking organizations report limited adoption of such NLP work due to poor alignment with fact-checker practices, values, and needs. To address this, we investigate a co-design method, Matchmaking for AI, to enable fact-checkers, designers, and NLP researchers to collaboratively identify what fact-checker needs should be addressed by technology, and to brainstorm ideas for potential solutions. Co-design sessions we conducted with 22 professional fact-checkers yielded a set of 11 design ideas that offer a "north star", integrating fact-checker criteria into novel NLP design concepts. These concepts include pre-bunking misinformation, efficient and personalized misinformation monitoring, proactively reducing fact-checkers' potential biases, and collaboratively writing fact-check reports. Our work provides new insights into both human-centered fact-checking research and practice and AI co-design research.
Related papers
- Can LLMs Automate Fact-Checking Article Writing? [69.90165567819656]
We argue for the need to extend the typical automatic fact-checking pipeline with automatic generation of full fact-checking articles.
We develop QRAFT, an LLM-based agentic framework that mimics the writing workflow of human fact-checkers.
arXiv Detail & Related papers (2025-03-22T07:56:50Z)
- Exploring Multidimensional Checkworthiness: Designing AI-assisted Claim Prioritization for Human Fact-checkers [5.22980614912553]
We develop an AI-assisted claim prioritization prototype as a probe to explore how fact-checkers use multidimensional checkworthy factors to prioritize claims.
We uncover a hierarchical prioritization strategy fact-checkers implicitly use, revealing an underexplored aspect of their workflow.
arXiv Detail & Related papers (2024-12-11T08:24:15Z)
- On Evaluating Explanation Utility for Human-AI Decision Making in NLP [39.58317527488534]
We review existing metrics suitable for application-grounded evaluation.
We demonstrate the importance of reassessing the state of the art to form and study human-AI teams.
arXiv Detail & Related papers (2024-07-03T23:53:27Z)
- The Impact and Opportunities of Generative AI in Fact-Checking [12.845170214324662]
Generative AI appears poised to transform white-collar professions, with more than 90% of Fortune 500 companies using OpenAI's flagship GPT models.
But how will such technologies impact organizations whose job is to verify and report factual information?
We conducted 30 interviews with N=38 participants working at 29 fact-checking organizations across six continents.
arXiv Detail & Related papers (2024-05-24T23:58:01Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- The State of Human-centered NLP Technology for Fact-checking [7.866556977836075]
Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, and disrupting democratic elections and financial markets.
A growing body of Natural Language Processing (NLP) technologies has been proposed for more scalable fact-checking.
Despite tremendous growth in such research, practical adoption of NLP technologies for fact-checking remains in its infancy today.
arXiv Detail & Related papers (2023-01-08T15:13:13Z)
- Autonomation, not Automation: Activities and Needs of Fact-checkers as a Basis for Designing Human-Centered AI Systems [1.7925621668797338]
We conducted in-depth interviews with Central European fact-checkers.
Our contributions include an in-depth examination of the variability of fact-checking work in non-English speaking regions.
Thanks to this interdisciplinary collaboration, we extend the fact-checking process in AI research with three additional stages.
arXiv Detail & Related papers (2022-11-22T10:18:09Z)
- An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.