Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings
- URL: http://arxiv.org/abs/2409.03500v2
- Date: Fri, 15 Nov 2024 15:42:46 GMT
- Title: Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings
- Authors: Fabrizio Gilardi, Sabrina Di Lorenzo, Juri Ezzaini, Beryl Santa, Benjamin Streiff, Eric Zurfluh, Emma Hoes
- Abstract summary: The integration of AI in journalism presents both opportunities and risks for democracy.
This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles.
- Score: 3.036383058306671
- License:
- Abstract: The advancement of artificial intelligence (AI) has led to its application in many areas, including news media. The integration of AI in journalism presents both opportunities and risks for democracy, making it crucial to understand public reception of and engagement with AI-generated news, as it may directly influence political knowledge and trust. This preregistered study investigates (i) the perceived quality of AI-assisted and AI-generated versus human-generated news articles, (ii) whether disclosure of AI's involvement in generating these news articles influences engagement with them, and (iii) whether such awareness affects the willingness to read AI-generated articles in the future. We employed a between-subjects survey experiment with 599 participants from the German-speaking part of Switzerland, who evaluated the credibility, readability, and expertise of news articles. These articles were either written by journalists (control group), rewritten by AI (AI-assisted group), or entirely generated by AI (AI-generated group). Our results indicate that all news articles, regardless of whether they were written by journalists or AI, were perceived to be of equal quality. When participants in the treatment groups were subsequently made aware of AI's involvement in generating the articles, they expressed a higher willingness to engage with (i.e., continue reading) the articles than participants in the control group. However, they were not more willing to read AI-generated news in the future. These results suggest that aversion to AI usage in news media is not primarily rooted in a perceived lack of quality, and that by disclosing the use of AI, journalists could attract more immediate engagement with their content, at least in the short term.
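The abstract does not specify the statistical procedure used to compare the three groups; as a rough illustration only, a between-subjects comparison of perceived quality across the control, AI-assisted, and AI-generated conditions might look like the sketch below. The file name, column names, and the choice of a one-way ANOVA are assumptions for illustration, not the authors' preregistered analysis.

```python
# Minimal sketch (not the authors' actual analysis): compare perceived quality
# ratings across three experimental groups with a one-way ANOVA.
# The CSV file, column names, and group labels are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ratings.csv")  # hypothetical file: one row per participant

# Treat perceived quality as the mean of credibility, readability, and expertise ratings
df["quality"] = df[["credibility", "readability", "expertise"]].mean(axis=1)

# One group of quality scores per condition: control / AI-assisted / AI-generated
groups = [g["quality"].values for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)

# A non-significant p-value would be consistent with "equal perceived quality" across groups
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```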
Related papers
- Artificial Intelligence in Brazilian News: A Mixed-Methods Analysis [0.0]
This study analyzes 3,560 news articles published between July 1, 2023, and February 29, 2024, by 13 popular Brazilian online news outlets.
The findings reveal that Brazilian news coverage of AI is dominated by topics related to applications in the workplace and product launches.
The analysis also highlights a significant presence of industry-related entities, indicating a strong influence of corporate agendas in the country's news.
arXiv Detail & Related papers (2024-10-22T20:52:51Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- The Impact of Knowledge Silos on Responsible AI Practices in Journalism [0.0]
This study aims to explore whether, and if so how, knowledge silos affect the adoption of responsible AI practices in journalism.
We conducted 14 semi-structured interviews with editors, managers, and journalists at de Telegraaf, de Volkskrant, the Nederlandse Omroep Stichting, and RTL Nederland.
Our results emphasize the importance of creating better structures for sharing information on AI across all layers of news organizations.
arXiv Detail & Related papers (2024-10-02T00:27:01Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Artificial Intelligence Index Report 2024 [15.531650534547945]
The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI).
The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on AI.
This year's edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.
arXiv Detail & Related papers (2024-05-29T20:59:57Z)
- The Global Impact of AI-Artificial Intelligence: Recent Advances and Future Directions, A Review [0.0]
The article highlights the implications of AI, including its impact on economic, ethical, social, security & privacy, and job displacement aspects.
It discusses the ethical concerns surrounding AI development, including issues of bias, security, and privacy violations.
The article concludes by emphasizing the importance of public engagement and education to promote awareness and understanding of AI's impact on society at large.
arXiv Detail & Related papers (2023-12-22T00:41:21Z)
- J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News [12.633638679020903]
We develop a framework, J-Guard, capable of steering existing supervised AI text detectors for detecting AI-generated news.
By incorporating stylistic cues inspired by the unique journalistic attributes, J-Guard effectively distinguishes between real-world journalism and AI-generated news articles.
arXiv Detail & Related papers (2023-09-06T17:06:31Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe consequences for society, such as social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.