Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using
Matchmaking for AI
- URL: http://arxiv.org/abs/2308.07213v2
- Date: Tue, 23 Jan 2024 04:59:29 GMT
- Authors: Houjiang Liu, Anubrata Das, Alexander Boltz, Didi Zhou, Daisy Pinaroc,
Matthew Lease, Min Kyung Lee
- Abstract summary: We investigate a co-design method, Matchmaking for AI, to enable fact-checkers, designers, and NLP researchers to collaboratively identify what fact-checker needs should be addressed by technology.
Co-design sessions we conducted with 22 professional fact-checkers yielded a set of 11 design ideas that offer a "north star" for integrating fact-checker criteria into novel NLP design concepts.
Our work provides new insights into both human-centered fact-checking research and practice and AI co-design research.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While many Natural Language Processing (NLP) techniques have been proposed
for fact-checking, both academic research and fact-checking organizations
report limited adoption of such NLP work due to poor alignment with
fact-checker practices, values, and needs. To address this, we investigate a
co-design method, Matchmaking for AI, to enable fact-checkers, designers, and
NLP researchers to collaboratively identify what fact-checker needs should be
addressed by technology, and to brainstorm ideas for potential solutions.
Co-design sessions we conducted with 22 professional fact-checkers yielded a
set of 11 design ideas that offer a "north star", integrating fact-checker
criteria into novel NLP design concepts. These concepts range from pre-bunking
misinformation, efficient and personalized monitoring misinformation,
proactively reducing fact-checker potential biases, and collaborative writing
fact-check reports. Our work provides new insights into both human-centered
fact-checking research and practice and AI co-design research.
Related papers
- The Impact and Opportunities of Generative AI in Fact-Checking [12.845170214324662]
Generative AI appears poised to transform white collar professions, with more than 90% of Fortune 500 companies using OpenAI's flagship GPT models.
But how will such technologies impact organizations whose job is to verify and report factual information?
We conducted 30 interviews with N=38 participants working at 29 fact-checking organizations across six continents.
arXiv Detail & Related papers (2024-05-24T23:58:01Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- The Shifted and The Overlooked: A Task-oriented Investigation of
User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the
Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- The State of Human-centered NLP Technology for Fact-checking [7.866556977836075]
Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, and disrupting democratic elections and financial markets.
A growing body of Natural Language Processing (NLP) technologies have been proposed for more scalable fact-checking.
Despite tremendous growth in such research, practical adoption of NLP technologies for fact-checking remains in its infancy.
arXiv Detail & Related papers (2023-01-08T15:13:13Z)
- Automated, not Automatic: Needs and Practices in European Fact-checking
Organizations as a basis for Designing Human-centered AI Systems [0.7874708385247353]
Despite existing research, a gap remains between fact-checking practitioners' needs and current AI research.
In this study, we conducted semi-structured in-depth interviews with Central European fact-checkers.
The information behavior and requirements on desired supporting tools were analyzed using iterative bottom-up content analysis.
arXiv Detail & Related papers (2022-11-22T10:18:09Z)
- An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)