Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media
- URL: http://arxiv.org/abs/2501.14040v2
- Date: Thu, 06 Feb 2025 19:32:31 GMT
- Title: Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media
- Authors: Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
- Abstract summary: AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society.
One way to understand these nuances is by looking at how the media reports on AI.
We analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania.
- Score: 3.2566808526538873
- Abstract: Emerging AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society. To mitigate these risks, governments, companies, and researchers have contributed regulatory frameworks, risk assessment approaches, and safety benchmarks, but these can lack nuance when considered in global deployment contexts. One way to understand these nuances is by looking at how the media reports on AI, as news media has a substantial influence on which negative impacts of AI are discussed in the public sphere and which impacts are deemed important. In this work, we analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania to gain valuable insights into the risks and harms of AI technologies as reported and prioritized across media outlets in different countries. This approach reveals a skewed prioritization of Societal Risks, followed by Legal & Rights-related Risks, Content Safety Risks, Cognitive Risks, Existential Risks, and Environmental Risks, as reflected in the prevalence of these risk categories in the news coverage of different nations. Furthermore, it highlights how the distribution of such concerns varies with the political bias of news outlets, underscoring the political nature of AI risk assessment processes and public opinion. By incorporating views from various regions and political orientations for assessing the risks and harms of AI, this work presents stakeholders, such as AI developers and policymakers, with insights into the AI risk categories prioritized in the public sphere. These insights may guide the development of more inclusive, safe, and responsible AI technologies that address the diverse concerns and needs across the world.
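The prevalence analysis the abstract describes can be illustrated with a minimal sketch. The dataset, column names, and example rows below are hypothetical placeholders, not the authors' data or code; only the risk category labels echo those named in the abstract.

```python
# Minimal sketch of the prevalence analysis described in the abstract.
# All data, column names, and example values are illustrative assumptions.
import pandas as pd

# Hypothetical input: one row per news article, already labeled with a
# risk category (the paper's actual labeling pipeline is not shown here).
articles = pd.DataFrame({
    "country":       ["US", "US", "DE", "KE", "IN"],
    "outlet_bias":   ["left", "right", "center", "center", "right"],
    "risk_category": ["Societal", "Legal & Rights", "Societal",
                      "Environmental", "Content Safety"],
})

# Share of each risk category in each country's coverage.
prevalence = (
    articles.groupby("country")["risk_category"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(prevalence)

# The same breakdown keyed on outlet political bias mirrors the paper's
# comparison across political orientations.
by_bias = (
    articles.groupby("outlet_bias")["risk_category"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(by_bias)
```

Grouping by country versus by outlet bias is the same aggregation with a different key, which is why the two comparisons in the abstract can share one pipeline.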
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Mapping the individual, social, and biospheric impacts of Foundation Models [0.39843531413098965]
This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
arXiv Detail & Related papers (2024-07-24T10:05:40Z) - AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z) - US-China perspectives on extreme AI risks and global governance [0.0]
We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence.
We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI).
Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control.
arXiv Detail & Related papers (2024-06-23T17:31:27Z) - Implications for Governance in Public Perceptions of Societal-scale AI Risks [0.29022435221103454]
Voters perceive AI risks as both more likely and more impactful than experts do, and they also advocate for slower AI development.
Policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks.
arXiv Detail & Related papers (2024-06-10T11:52:25Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z) - Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance.
Risks can be quantified using metrics from the technical community.
This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
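As a rough illustration of the kind of quantitative assessment this last entry discusses, the sketch below aggregates normalized risk metrics into a single score. The metric names, values, weights, and the weighted-average aggregation are assumptions made for illustration, not the paper's method.

```python
# Illustrative sketch only: metric names, values, and weights are
# hypothetical and not taken from the paper.
from dataclasses import dataclass

@dataclass
class RiskMetric:
    name: str      # e.g., a benchmark-derived measurement
    value: float   # normalized to [0, 1]; higher means riskier
    weight: float  # relative importance, set by a governance process

def composite_risk_score(metrics: list[RiskMetric]) -> float:
    """Weighted average of normalized risk metrics."""
    total_weight = sum(m.weight for m in metrics)
    return sum(m.value * m.weight for m in metrics) / total_weight

# Hypothetical lifecycle measurements for one model release.
metrics = [
    RiskMetric("toxic_output_rate", 0.12, weight=3.0),
    RiskMetric("privacy_leak_rate", 0.05, weight=2.0),
    RiskMetric("jailbreak_success_rate", 0.30, weight=5.0),
]
print(f"composite risk score: {composite_risk_score(metrics):.3f}")
```

A single weighted average is the simplest possible aggregation; choosing which metrics to include and how to weight them is itself part of the lifecycle governance the entry mentions.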
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.