Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks
- URL: http://arxiv.org/abs/2507.23718v1
- Date: Thu, 31 Jul 2025 16:52:21 GMT
- Title: Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks
- Authors: Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
- Abstract summary: This work presents a comparative analysis of a cross-national sample of news media spanning 6 countries. Our findings show that AI risks are prioritized differently across nations and shed light on how left- vs. right-leaning U.S.-based outlets differ in the prioritization of AI risks in their coverage. These findings can inform risk assessors and policy-makers about the nuances they should account for when considering news media as a supplementary source for risk-based governance approaches.
- Score: 3.2566808526538873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Risk-based approaches to AI governance often center the technological artifact as the primary focus of risk assessments, overlooking systemic risks that emerge from the complex interaction between AI systems and society. One potential source to incorporate more societal context into these approaches is the news media, as it embeds and reflects complex interactions between AI systems, human stakeholders, and the larger society. News media is influential in terms of which AI risks are emphasized and discussed in the public sphere, and thus which risks are deemed important. Yet, variations in the news media between countries and across different value systems (e.g., political orientations) may differentially shape the prioritization of risks through the media's agenda-setting and framing processes. To better understand these variations, this work presents a comparative analysis of a cross-national sample of news media spanning 6 countries (the U.S., the U.K., India, Australia, Israel, and South Africa). Our findings show that AI risks are prioritized differently across nations and shed light on how left- vs. right-leaning U.S.-based outlets not only differ in the prioritization of AI risks in their coverage, but also use politicized language in the reporting of these risks. These findings can inform risk assessors and policy-makers about the nuances they should account for when considering news media as a supplementary source for risk-based governance approaches.
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem by often either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - An Overview of the Risk-based Model of AI Governance [0.0]
The 'Analysis' section proposes several criticisms of the risk-based approach to AI governance. It argues that the notion of risk is problematic, as its inherent normativity reproduces dominant and harmful narratives about whose interests matter. This paper concludes with the suggestion that existing risk governance scholarship can provide valuable insights toward improving risk-based AI governance.
arXiv Detail & Related papers (2025-07-21T06:56:04Z) - A First-Principles Based Risk Assessment Framework and the IEEE P3396 Standard [0.0]
Generative Artificial Intelligence (AI) is enabling unprecedented automation in content creation and decision support. This paper presents a first-principles risk assessment framework underlying the IEEE P3396 Recommended Practice for AI Risk, Safety, Trustworthiness, and Responsibility.
arXiv Detail & Related papers (2025-03-31T18:00:03Z) - Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China [0.0]
This paper conducts a comparative analysis of AI risk management strategies across the European Union, United States, United Kingdom (UK), and China. The findings show that the EU implements a structured, risk-based framework that prioritizes transparency and conformity assessments. The U.S. relies on decentralized, sector-specific regulations that promote innovation but may lead to fragmented enforcement.
arXiv Detail & Related papers (2025-02-25T18:52:17Z) - Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media [3.2566808526538873]
AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society. One way to understand these nuances is by looking at how the media reports on AI. We analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania.
arXiv Detail & Related papers (2025-01-23T19:14:11Z) - AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z) - Implications for Governance in Public Perceptions of Societal-scale AI Risks [0.29022435221103454]
Voters perceive AI risks as both more likely and more impactful than experts, and also advocate for slower AI development.
Policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks.
arXiv Detail & Related papers (2024-06-10T11:52:25Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - AI Alignment: A Comprehensive Survey [69.61425542486275]
AI alignment aims to make AI systems behave in line with human intentions and values. We identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality. We decompose current alignment research into two key components: forward alignment and backward alignment.
arXiv Detail & Related papers (2023-10-30T15:52:15Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.