Developing Story: Case Studies of Generative AI's Use in Journalism
- URL: http://arxiv.org/abs/2406.13706v2
- Date: Tue, 03 Dec 2024 04:57:32 GMT
- Title: Developing Story: Case Studies of Generative AI's Use in Journalism
- Authors: Natalie Grace Brigham, Chongjiu Gao, Tadayoshi Kohno, Franziska Roesner, Niloofar Mireshghallah
- Abstract summary: We conduct a study of journalist-AI interactions at two news agencies by browsing the WildChat dataset.
Our analysis uncovers instances where journalists provide sensitive material such as confidential correspondence with sources or articles from other agencies to the LLM as stimuli and prompt it to generate articles.
Based on our findings, we call for further research into what constitutes responsible use of AI, and the establishment of clear guidelines and best practices on using LLMs in a journalistic context.
- Score: 18.67676679963561
- Abstract: Journalists are among the many users of large language models (LLMs). To better understand journalist-AI interactions, we conduct a study of LLM usage by two news agencies through browsing the WildChat dataset, identifying candidate interactions, and verifying them by matching to online published articles. Our analysis uncovers instances where journalists provide sensitive material such as confidential correspondence with sources or articles from other agencies to the LLM as stimuli and prompt it to generate articles, and publish these machine-generated articles with limited intervention (median output-publication ROUGE-L of 0.62). Based on our findings, we call for further research into what constitutes responsible use of AI, and the establishment of clear guidelines and best practices on using LLMs in a journalistic context.
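The "median output-publication ROUGE-L of 0.62" measures how much of the LLM's output survives into the published article via the longest common subsequence (LCS). The sketch below is not the authors' evaluation code; it is a minimal, self-contained implementation of the standard ROUGE-L F1 over whitespace tokens, shown only to make the metric concrete.

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length of two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    # ROUGE-L F1: harmonic mean of LCS precision and recall over tokens.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

On this scale, a score of 0.62 between model output and published text indicates that well over half of the output's token sequence was carried through to publication.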
Related papers
- Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different Languages [51.96666324242191]
We analyze whether user utilization of novel writing assistants in a charity advertisement writing task is affected by the AI's performance in a second language.
We quantify the extent to which these patterns translate into the persuasiveness of generated charity advertisements.
arXiv Detail & Related papers (2025-02-13T17:49:30Z) - "Ownership, Not Just Happy Talk": Co-Designing a Participatory Large Language Model for Journalism [7.25169954977234]
Journalism has emerged as an essential domain for understanding the uses, limitations, and impacts of large language models (LLMs) in the workplace.
How might a journalist-led LLM work, and what can participatory design illuminate about the present-day challenges of adapting "one-size-fits-all" foundation models to a given context of use?
Our 20 interviews with reporters, data journalists, editors, labor organizers, product leads, and executives highlight macro, meso, and micro tensions that designing for this opportunity space must address.
arXiv Detail & Related papers (2025-01-28T21:06:52Z) - JRE-L: Journalist, Reader, and Editor LLMs in the Loop for Science Journalism for the General Audience [3.591143309194537]
Science journalism reports current scientific discoveries to non-specialists, aiming to enable public comprehension of the state of the art.
We propose a JRE-L framework that integrates three LLMs mimicking the writing-reading-feedback-revision loop.
Our code is publicly available at accessible.com/Zzoay/JRE-L.
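The JRE-L framework is described here only at a high level (three LLMs mimicking a writing-reading-feedback-revision loop). The following is a hypothetical sketch of such a loop, not the paper's implementation; `call_llm` is a stand-in for whatever chat-completion API the roles would actually use, implemented as a trivial echo so the sketch runs without network access.

```python
def call_llm(role, prompt):
    # Stand-in for a real chat-completion call; returns a tagged echo
    # so the control flow below is runnable offline.
    return f"[{role}] response to: {prompt[:40]}"

def jre_loop(paper_summary, rounds=2):
    # Journalist drafts the article; Reader flags comprehension problems
    # from a non-specialist's view; Editor revises using that feedback.
    draft = call_llm("journalist",
                     f"Write a popular-science article about: {paper_summary}")
    for _ in range(rounds):
        feedback = call_llm("reader",
                            f"As a non-specialist, note anything confusing in: {draft}")
        draft = call_llm("editor",
                         f"Revise using this feedback: {feedback}\n\nArticle: {draft}")
    return draft
```

The loop structure, role names, and prompts are assumptions for illustration; the point is only that each iteration routes the draft through a simulated lay reader before revision.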
arXiv Detail & Related papers (2025-01-28T11:30:35Z) - NewsInterview: a Dataset and a Playground to Evaluate LLMs' Ground Gap via Informational Interviews [65.35458530702442]
We focus on journalistic interviews, a domain rich in grounding communication and abundant in data.
We curate a dataset of 40,000 two-person informational interviews from NPR and CNN.
LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions.
arXiv Detail & Related papers (2024-11-21T01:37:38Z) - Enhancing Journalism with AI: A Study of Contextualized Image Captioning for News Articles using LLMs and LMMs [2.1165011830664673]
Large language models (LLMs) and large multimodal models (LMMs) have significantly impacted the AI community.
This study explores how LLMs and LMMs can assist journalistic practice by generating contextualised captions for images accompanying news articles.
arXiv Detail & Related papers (2024-08-08T09:31:24Z) - LLM-Collaboration on Automatic Science Journalism for the General Audience [3.591143309194537]
Science journalism reports current scientific discoveries to non-specialists.
This task can be challenging as the audience often lacks specific knowledge about the presented research.
We propose a framework that integrates three LLMs mimicking the real-world writing-reading-feedback-revision workflow.
arXiv Detail & Related papers (2024-07-13T03:31:35Z) - Quantifying Generative Media Bias with a Corpus of Real-world and Generated News Articles [12.356251871670011]
Large language models (LLMs) are increasingly being utilised across a range of tasks and domains.
This study focuses on political bias, detecting it using both supervised models and LLMs.
For the first time within the journalistic domain, this study outlines a framework for quantifiable experiments.
arXiv Detail & Related papers (2024-06-16T01:32:04Z) - Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation [58.31430028519306]
This study explores how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process.
While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs.
arXiv Detail & Related papers (2023-10-16T15:11:01Z) - Identifying Informational Sources in News Articles [109.70475599552523]
We build the largest and widest-ranging annotated dataset of informational sources used in news writing.
We introduce a novel task, source prediction, to study the compositionality of sources in news articles.
arXiv Detail & Related papers (2023-05-24T08:56:35Z) - Towards Corpus-Scale Discovery of Selection Biases in News Coverage: Comparing What Sources Say About Entities as a Start [65.28355014154549]
This paper investigates the challenges of building scalable NLP systems for discovering patterns of media selection biases directly from news content in massive-scale news corpora.
We show the capabilities of the framework through a case study on NELA-2020, a corpus of 1.8M news articles in English from 519 news sources worldwide.
arXiv Detail & Related papers (2023-04-06T23:36:45Z) - VMSMO: Learning to Generate Multimodal Summary for Video-based News Articles [63.32111010686954]
We propose the task of Video-based Multimodal Summarization with Multimodal Output (VMSMO).
The main challenge in this task is to jointly model the temporal dependency of the video with the semantic meaning of the article.
We propose a Dual-Interaction-based Multimodal Summarizer (DIMS), consisting of a dual interaction module and a multimodal generator.
arXiv Detail & Related papers (2020-10-12T02:19:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.