Willingness to Read AI-Generated News Is Not Driven by Their Perceived Quality
- URL: http://arxiv.org/abs/2409.03500v3
- Date: Fri, 14 Feb 2025 10:39:01 GMT
- Title: Willingness to Read AI-Generated News Is Not Driven by Their Perceived Quality
- Authors: Fabrizio Gilardi, Sabrina Di Lorenzo, Juri Ezzaini, Beryl Santa, Benjamin Streiff, Eric Zurfluh, Emma Hoes
- Abstract summary: This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles.
It also investigates whether disclosure of AI's involvement in generating these news articles influences engagement with them.
- Score: 3.036383058306671
- Abstract: The advancement of artificial intelligence has led to its application in many areas, including news media, which makes it crucial to understand public reception of AI-generated news. This preregistered study investigates (i) the perceived quality of AI-assisted and AI-generated versus human-generated news articles, (ii) whether disclosure of AI's involvement in generating these news articles influences engagement with them, and (iii) whether such awareness affects the willingness to read AI-generated articles in the future. We conducted a survey experiment with 599 Swiss participants, who evaluated the credibility, readability, and expertise of news articles either written by journalists (control group), rewritten by AI (AI-assisted group), or entirely written by AI (AI-generated group). Our results indicate that all articles were perceived to be of equal quality. When participants in the treatment groups were subsequently made aware of AI's role, they expressed a higher willingness to continue reading the articles than participants in the control group. However, they were not more willing to read AI-generated news in the future. These results suggest that aversion to AI usage in news media is not primarily rooted in a perceived lack of quality, and that by disclosing their use of AI, journalists could induce more short-term engagement.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - The Impact of Knowledge Silos on Responsible AI Practices in Journalism [0.0]
This study explores whether, and if so how, knowledge silos affect the adoption of responsible AI practices in journalism.
We conducted 14 semi-structured interviews with editors, managers, and journalists at de Telegraaf, de Volkskrant, the Nederlandse Omroep Stichting, and RTL Nederland.
Our results emphasize the importance of creating better structures for sharing information on AI across all layers of news organizations.
arXiv Detail & Related papers (2024-10-02T00:27:01Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Artificial Intelligence Index Report 2024 [15.531650534547945]
The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI)
The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on AI.
This year's edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.
arXiv Detail & Related papers (2024-05-29T20:59:57Z) - J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News [12.633638679020903]
We develop a framework, J-Guard, capable of steering existing supervised AI text detectors for detecting AI-generated news.
By incorporating stylistic cues inspired by the unique journalistic attributes, J-Guard effectively distinguishes between real-world journalism and AI-generated news articles.
arXiv Detail & Related papers (2023-09-06T17:06:31Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - AI and the Sense of Self [0.0]
We focus on the cognitive sense of "self" and its role in autonomous decision-making leading to responsible behaviour.
The authors hope to make a case for greater research interest in building richer computational models of AI agents with a sense of self.
arXiv Detail & Related papers (2022-01-07T10:54:06Z) - Measuring Ethics in AI with AI: A Methodology and Dataset Construction [1.6861004263551447]
We propose to use these newfound capabilities of AI technologies to augment our ability to measure AI itself.
We do so by training a model to classify publications related to ethical issues and concerns.
We highlight the implications of AI metrics, in particular their contribution toward developing trustworthy and fair AI-based tools and technologies.
arXiv Detail & Related papers (2021-07-26T00:26:12Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.