How to Stop Playing Whack-a-Mole: Mapping the Ecosystem of Technologies Facilitating AI-Generated Non-Consensual Intimate Images
- URL: http://arxiv.org/abs/2602.04759v1
- Date: Wed, 04 Feb 2026 16:58:05 GMT
- Title: How to Stop Playing Whack-a-Mole: Mapping the Ecosystem of Technologies Facilitating AI-Generated Non-Consensual Intimate Images
- Authors: Michelle L. Ding, Harini Suresh, Suresh Venkatasubramanian
- Abstract summary: AIG-NCII is a form of image-based sexual abuse that disproportionately harms women and girls. There is a patchwork of commendable efforts across industry, policy, academia, and civil society to address AIG-NCII. We contribute the first comprehensive AIG-NCII technological ecosystem that maps and taxonomizes 11 categories of technologies.
- Score: 2.9855784955026805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The last decade has witnessed a rapid advancement of generative AI technology that significantly scaled the accessibility of AI-generated non-consensual intimate images (AIG-NCII), a form of image-based sexual abuse that disproportionately harms women and girls. There is a patchwork of commendable efforts across industry, policy, academia, and civil society to address AIG-NCII. However, these efforts lack a shared, consistent mental model that situates the technologies they target within the context of a large, interconnected, and ever-evolving technological ecosystem. As a result, interventions remain siloed and are difficult to evaluate and compare, leading to a reactive cycle of whack-a-mole. We contribute the first comprehensive AIG-NCII technological ecosystem that maps and taxonomizes 11 categories of technologies facilitating the creation, distribution, proliferation and discovery, infrastructural support, and monetization of AIG-NCII. First, we build and visualize the ecosystem through a synthesis of over a hundred primary sources from researchers, journalists, advocates, policymakers, and technologists. Next, we demonstrate how stakeholders can use the ecosystem as a tool to 1) understand new incidents of harm via a case study of Grok and 2) evaluate existing interventions via three more case studies. We conclude with three actionable recommendations, namely that stakeholders should 1) use the ecosystem to map out state, federal, and international laws to produce a clearer policy landscape, 2) collectively develop a database that dynamically tracks the 11 technologies in the ecosystem to better evaluate interventions, and 3) adopt a relational approach to researching AIG-NCII to better understand how the ecosystem technologies interact.
Related papers
- Information Access of the Oppressed: A Problem-Posing Framework for Envisioning Emancipatory Information Access Platforms [5.801539233803859]
Online information access platforms are targets of authoritarian capture. We explore this question through the lens of Paulo Freire's theories of emancipatory pedagogy.
arXiv Detail & Related papers (2026-01-14T16:15:26Z) - AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Responsible Data Stewardship: Generative AI and the Digital Waste Problem [0.0]
Generative AI systems enable the creation of synthetic data across text, image, audio, and video modalities at unprecedented levels. Digital waste refers to stored data that consumes resources without serving a specific (and/or immediate) purpose. This paper introduces digital waste as an ethical imperative within (generative) AI development, positioning environmental sustainability as core to responsible innovation.
arXiv Detail & Related papers (2025-05-27T20:07:22Z) - Reality Check: A New Evaluation Ecosystem Is Necessary to Understand AI's Real World Effects [3.1402583853710433]
The paper argues that measuring the indirect and secondary effects of AI will require expansion beyond static, single-turn approaches conducted in silico. We describe the need for data and methods that can facilitate contextual awareness and enable downstream interpretation and decision making about AI's secondary effects.
arXiv Detail & Related papers (2025-05-24T22:35:32Z) - Open and Sustainable AI: challenges, opportunities and the road ahead in the life sciences (October 2025 -- Version 2) [49.142289900583705]
We review the increasing erosion of trust in AI research outputs, driven by issues of poor reusability. We discuss the fragmented components of the AI ecosystem and the lack of guiding pathways to best support Open and Sustainable AI. Our work connects researchers with relevant AI resources, facilitating the implementation of sustainable, reusable, and transparent AI.
arXiv Detail & Related papers (2025-05-22T12:52:34Z) - Data Ecofeminism [0.0]
Generative Artificial Intelligence (GenAI) is driving significant environmental impacts. The paper calls for an urgent reassessment of the GenAI innovation race.
arXiv Detail & Related papers (2025-02-16T11:47:50Z) - On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and the environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z) - A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions [19.50785795365068]
AIGC uses large generative AI models to assist humans in creating massive, high-quality, and human-like content at a faster pace and lower cost.
This paper presents an in-depth survey of working principles, security and privacy threats, state-of-the-art solutions, and future challenges of the AIGC paradigm.
arXiv Detail & Related papers (2023-05-25T15:09:11Z) - Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z) - The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to provide those unfamiliar with the field with insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.