Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media
- URL: http://arxiv.org/abs/2501.14040v2
- Date: Thu, 06 Feb 2025 19:32:31 GMT
- Title: Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media
- Authors: Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
- Abstract summary: AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society. One way to understand these nuances is by looking at how the media reports on AI. We analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania.
- Score: 3.2566808526538873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emerging AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society. To mitigate these risks, governments, companies, and researchers have contributed regulatory frameworks, risk assessment approaches, and safety benchmarks, but these can lack nuance when considered in global deployment contexts. One way to understand these nuances is by looking at how the media reports on AI, as news media has a substantial influence on which negative impacts of AI are discussed in the public sphere and which impacts are deemed important. In this work, we analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania to gain valuable insights into the risks and harms of AI technologies as reported and prioritized across media outlets in different countries. This approach reveals a skewed prioritization of Societal Risks followed by Legal & Rights-related Risks, Content Safety Risks, Cognitive Risks, Existential Risks, and Environmental Risks, as reflected in the prevalence of these risk categories in the news coverage of different nations. Furthermore, it highlights how the distribution of such concerns varies based on the political bias of news outlets, underscoring the political nature of AI risk assessment processes and public opinion. By incorporating views from various regions and political orientations for assessing the risks and harms of AI, this work presents stakeholders, such as AI developers and policymakers, with insights into the AI risk categories prioritized in the public sphere. These insights may guide the development of more inclusive, safe, and responsible AI technologies that address the diverse concerns and needs across the world.
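The abstract's core analysis, ranking risk categories by their prevalence in coverage per country, can be illustrated with a minimal sketch. This is not the authors' code: the category names follow the taxonomy quoted in the abstract, while the article records below are invented placeholders standing in for the labeled news corpus.

```python
# Illustrative sketch: ranking AI risk categories by prevalence in coverage.
# Article data here are invented placeholders, not the paper's dataset.
from collections import Counter

articles = [
    {"country": "Kenya", "category": "Societal Risks"},
    {"country": "Kenya", "category": "Societal Risks"},
    {"country": "Japan", "category": "Legal & Rights-related Risks"},
    {"country": "Japan", "category": "Societal Risks"},
    {"country": "France", "category": "Environmental Risks"},
]

def category_prevalence(articles):
    """Return (category, share-of-coverage) pairs, most prevalent first."""
    counts = Counter(a["category"] for a in articles)
    total = sum(counts.values())
    return [(cat, n / total) for cat, n in counts.most_common()]

ranking = category_prevalence(articles)
```

The same tally, grouped first by country or by outlet political bias, yields the per-nation and per-leaning distributions the paper compares.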
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks [3.2566808526538873]
This work presents a comparative analysis of a cross-national sample of news media spanning 6 countries. Our findings show that AI risks are prioritized differently across nations and shed light on how left- vs. right-leaning U.S.-based outlets differ in the prioritization of AI risks in their coverage. These findings can inform risk assessors and policy-makers about the nuances they should account for when considering news media as a supplementary source for risk-based governance approaches.
arXiv Detail & Related papers (2025-07-31T16:52:21Z) - An Overview of the Risk-based Model of AI Governance [0.0]
The 'Analysis' section proposes several criticisms of the risk-based approach to AI governance. It argues that the notion of risk is problematic, as its inherent normativity reproduces dominant and harmful narratives about whose interests matter. This paper concludes with the suggestion that existing risk governance scholarship can provide valuable insights toward improving risk-based AI governance.
arXiv Detail & Related papers (2025-07-21T06:56:04Z) - The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
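The scoring idea behind a hazard benchmark of this kind, measuring how often a system resists unsafe prompts per hazard category, can be sketched minimally. This is in the spirit of AILuminate's description above, not its actual API or data; the categories and responses below are invented stand-ins.

```python
# Minimal sketch of per-category refusal-rate scoring for a hazard benchmark.
# Categories and responses are toy stand-ins, not AILuminate's real data.
def refusal_rate_by_category(responses):
    """responses: list of (category, refused) pairs; returns rate per category."""
    totals, refusals = {}, {}
    for category, refused in responses:
        totals[category] = totals.get(category, 0) + 1
        refusals[category] = refusals.get(category, 0) + int(refused)
    return {c: refusals[c] / totals[c] for c in totals}

toy_responses = [
    ("weapons", True), ("weapons", True), ("weapons", False),
    ("fraud", True), ("fraud", False),
]
rates = refusal_rate_by_category(toy_responses)
```

A real benchmark would additionally grade responses with trained evaluators rather than a boolean flag, but the per-category aggregation is the same shape.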
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Mapping the individual, social, and biospheric impacts of Foundation Models [0.39843531413098965]
This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
arXiv Detail & Related papers (2024-07-24T10:05:40Z) - AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z) - US-China perspectives on extreme AI risks and global governance [0.0]
We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence.
We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI)
Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control.
arXiv Detail & Related papers (2024-06-23T17:31:27Z) - Implications for Governance in Public Perceptions of Societal-scale AI Risks [0.29022435221103454]
Voters perceive AI risks as both more likely and more impactful than experts do, and also advocate for slower AI development.
Policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks.
arXiv Detail & Related papers (2024-06-10T11:52:25Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy [65.77763092833348]
This perspective examines vulnerabilities in AI scientists, shedding light on potential risks associated with their misuse. We take into account user intent, the specific scientific domain, and their potential impact on the external environment. We propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback.
arXiv Detail & Related papers (2024-02-06T18:54:07Z) - An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
Malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty of controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z) - Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance. Risks can be quantified using metrics from the technical community. This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.