A Closer Look at the Existing Risks of Generative AI: Mapping the Who, What, and How of Real-World Incidents
- URL: http://arxiv.org/abs/2505.22073v2
- Date: Mon, 02 Jun 2025 19:08:46 GMT
- Authors: Megan Li, Wendy Bickersteth, Ningjing Tang, Jason Hong, Lorrie Cranor, Hong Shen, Hoda Heidari
- Abstract summary: We construct a taxonomy specifically for Generative AI failures and map them to the harms they precipitate. We report the prevalence of each type of harm, underlying failure mode, and harmed stakeholder, as well as their common co-occurrences. Our work offers actionable insights to policymakers, developers, and Generative AI users.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to its general-purpose nature, Generative AI is applied in an ever-growing set of domains and tasks, leading to an expanding set of risks of harm impacting people, communities, society, and the environment. These risks may arise due to failures during the design and development of the technology, as well as during its release, deployment, or downstream usages and appropriations of its outputs. In this paper, building on prior taxonomies of AI risks, harms, and failures, we construct a taxonomy specifically for Generative AI failures and map them to the harms they precipitate. Through a systematic analysis of 499 publicly reported incidents, we describe what harms are reported, how they arose, and who they impact. We report the prevalence of each type of harm, underlying failure mode, and harmed stakeholder, as well as their common co-occurrences. We find that most reported incidents are caused by use-related issues but bring harm to parties beyond the end user(s) of the Generative AI system at fault, and that the landscape of Generative AI harms is distinct from that of traditional AI. Our work offers actionable insights to policymakers, developers, and Generative AI users. In particular, we call for the prioritization of non-technical risk and harm mitigation strategies, including public disclosures and education, and careful regulatory stances.
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - Lessons for Editors of AI Incidents from the AI Incident Database [2.5165775267615205]
The AI Incident Database (AIID) is a project that catalogs AI incidents and supports further research by providing a platform to classify incidents.
This study reviews the AIID's dataset of 750+ AI incidents and two independent taxonomies applied to these incidents to identify common challenges to indexing and analyzing AI incidents.
We report mitigations to make incident processes more robust to uncertainty related to cause, extent of harm, severity, or technical details of implicated systems.
arXiv Detail & Related papers (2024-09-24T19:46:58Z) - Mapping the individual, social, and biospheric impacts of Foundation Models [0.39843531413098965]
This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
arXiv Detail & Related papers (2024-07-24T10:05:40Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators [2.500481442438427]
We analyse speech generation incidents to study how patterns of specific harms arise.
We propose a conceptual framework for modelling pathways to ethical and safety harms of AI.
Our relational approach captures the complexity of risks and harms in sociotechnical AI systems.
arXiv Detail & Related papers (2024-01-25T11:47:06Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z) - A taxonomic system for failure cause analysis of open source AI incidents [6.85316573653194]
This work demonstrates how to apply expert knowledge on the population of incidents in the AI Incident Database (AIID) to infer potential and likely technical causative factors that contribute to reported failures and harms.
We present early work on a taxonomic system that covers a cascade of interrelated incident factors, from system goals (nearly always known) to methods / technologies (knowable in many cases) and technical failure causes (subject to expert analysis) of the implicated systems.
arXiv Detail & Related papers (2022-11-14T11:21:30Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.