The Impact of Knowledge Silos on Responsible AI Practices in Journalism
- URL: http://arxiv.org/abs/2410.01138v2
- Date: Wed, 23 Oct 2024 10:58:38 GMT
- Title: The Impact of Knowledge Silos on Responsible AI Practices in Journalism
- Authors: Tomás Dodds, Astrid Vandendaele, Felix M. Simon, Natali Helberger, Valeria Resendez, Wang Ngai Yeung
- Abstract summary: This study explores whether, and if so how, knowledge silos affect the adoption of responsible AI practices in journalism.
We conducted 14 semi-structured interviews with editors, managers, and journalists at de Telegraaf, de Volkskrant, the Nederlandse Omroep Stichting, and RTL Nederland.
Our results emphasize the importance of creating better structures for sharing information on AI across all layers of news organizations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The effective adoption of responsible AI practices in journalism requires a concerted effort to bridge different perspectives, including technological, editorial, journalistic, and managerial. Among the many challenges that could impact information sharing around responsible AI inside news organizations are knowledge silos, where information is isolated within one part of the organization and not easily shared with others. This study explores whether, and if so how, knowledge silos affect the adoption of responsible AI practices in journalism through a cross-case study of four major Dutch media outlets. We examine the individual and organizational barriers to AI knowledge sharing and the extent to which knowledge silos could impede the operationalization of responsible AI initiatives inside newsrooms. To address this question, we conducted 14 semi-structured interviews with editors, managers, and journalists at de Telegraaf, de Volkskrant, the Nederlandse Omroep Stichting (NOS), and RTL Nederland. The interviews aimed to uncover insights into the existence of knowledge silos, their effects on responsible AI practice adoption, and the organizational practices influencing these dynamics. Our results emphasize the importance of creating better structures for sharing information on AI across all layers of news organizations.
Related papers
- "It Might be Technically Impressive, But It's Practically Useless to Us": Practices, Challenges, and Opportunities for Cross-Functional Collaboration around AI within the News Industry [7.568817736131254]
An increasing number of news organizations have integrated artificial intelligence (AI) into their operations.
This has initiated cross-functional collaboration between technical professionals and journalists.
This study investigates the current practices, challenges, and opportunities for cross-functional collaboration around AI in today's news industry.
arXiv Detail & Related papers (2024-09-18T14:12:01Z) - Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings [3.036383058306671]
This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles.
We employ a survey experiment with 599 participants from the German-speaking part of Switzerland.
Our results indicate that all news articles, regardless of whether they were written by journalists or AI, were perceived to be of equal quality.
arXiv Detail & Related papers (2024-09-05T13:12:16Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize the beliefs these stakeholders expressed into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - The Impact and Opportunities of Generative AI in Fact-Checking [12.845170214324662]
Generative AI appears poised to transform white-collar professions, with more than 90% of Fortune 500 companies using OpenAI's flagship GPT models.
But how will such technologies impact organizations whose job is to verify and report factual information?
We conducted 30 interviews with N=38 participants working at 29 fact-checking organizations across six continents.
arXiv Detail & Related papers (2024-05-24T23:58:01Z) - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media [0.0]
This study analyzes 37 AI guidelines for media purposes in 17 countries.
Our analysis reveals key thematic areas, such as transparency, accountability, fairness, privacy, and the preservation of journalistic values.
Results highlight shared principles and best practices that emerge from these guidelines.
arXiv Detail & Related papers (2024-05-07T22:47:56Z) - The Global Impact of AI-Artificial Intelligence: Recent Advances and Future Directions, A Review [0.0]
The article highlights the implications of AI, including its impact on economic, ethical, social, security & privacy, and job displacement aspects.
It discusses the ethical concerns surrounding AI development, including issues of bias, security, and privacy violations.
The article concludes by emphasizing the importance of public engagement and education to promote awareness and understanding of AI's impact on society at large.
arXiv Detail & Related papers (2023-12-22T00:41:21Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We examine which responsible AI issues are covered, how and when they are addressed, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and empirically analyzing its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z) - Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can help bridge these perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions among user engagement, mental models, trust, and performance measures during the explanation process.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.