Dislocated Accountabilities in the AI Supply Chain: Modularity and
Developers' Notions of Responsibility
- URL: http://arxiv.org/abs/2209.09780v3
- Date: Wed, 21 Jun 2023 02:24:00 GMT
- Title: Dislocated Accountabilities in the AI Supply Chain: Modularity and
Developers' Notions of Responsibility
- Authors: David Gray Widder and Dawn Nafus
- Abstract summary: We use Suchman's "located accountability" to show how responsible artificial intelligence labor is currently organized.
We argue that current responsible artificial intelligence interventions, like ethics checklists, could be improved by taking a located accountability approach.
- Score: 1.2691047660244335
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Responsible artificial intelligence guidelines ask engineers to consider how
their systems might harm. However, contemporary artificial intelligence systems
are built by composing many preexisting software modules that pass through many
hands before becoming a finished product or service. How does this shape
responsible artificial intelligence practice? In interviews with 27 artificial
intelligence engineers across industry, open source, and academia, our
participants often did not see the questions posed in responsible artificial
intelligence guidelines to be within their agency, capability, or
responsibility to address. We use Suchman's "located accountability" to show
how responsible artificial intelligence labor is currently organized and to
explore how it could be done differently. We identify cross-cutting social
logics, like modularizability, scale, reputation, and customer orientation,
that organize which responsible artificial intelligence actions do take place
and which are relegated to low status staff or believed to be the work of the
next or previous person in the imagined "supply chain." We argue that current
responsible artificial intelligence interventions, like ethics checklists and
guidelines that assume panoptical knowledge and control over systems, could be
improved by taking a located accountability approach, recognizing where
relations and obligations might intertwine inside and outside of this supply
chain.
Related papers
- A Comprehensive Review of AI Agents: Transforming Possibilities in Technology and Beyond [3.96715377510494]
This review aims to guide the next generation of AI agent systems toward more robust, adaptable, and trustworthy autonomous intelligence. We synthesize insights from cognitive science-inspired models, hierarchical reinforcement learning frameworks, and large language model-based reasoning. We discuss the pressing ethical, safety, and interpretability concerns associated with deploying these agents in real-world scenarios.
arXiv Detail & Related papers (2025-08-16T07:38:45Z) - FAIRTOPIA: Envisioning Multi-Agent Guardianship for Disrupting Unfair AI Pipelines [1.556153237434314]
AI models have become active decision makers, often acting without human supervision. We envision agents as fairness guardians, since agents learn from their environment. We introduce a fairness-by-design approach which embeds multi-role agents in an end-to-end (human to AI) synergetic scheme.
arXiv Detail & Related papers (2025-06-10T17:02:43Z) - Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and Interaction Design [0.0]
This article examines how strategic prompt engineering can embed ethical and legal considerations directly into AI interactions.
It proposes a framework for responsible prompt engineering that encompasses five interconnected components.
The analysis reveals that effective prompt engineering requires a delicate balance between technical precision and ethical consciousness.
arXiv Detail & Related papers (2025-04-22T18:51:32Z) - Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [132.77459963706437]
This book provides a comprehensive overview, framing intelligent agents within modular, brain-inspired architectures. It explores self-enhancement and adaptive evolution mechanisms, showing how agents autonomously refine their capabilities. It also examines the collective intelligence emerging from agent interactions, cooperation, and societal structures.
arXiv Detail & Related papers (2025-03-31T18:00:29Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - Governing AI Agents [0.2913760942403036]
This article examines the economic theory of principal-agent problems and the common law doctrine of agency relationships.
It identifies problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty.
It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions Across Disciplines [0.0]
We review current research across multiple disciplines and identify key dimensions of accountability in the context of AI.
We reveal six themes with 13 corresponding dimensions and additional accountability facilitators.
arXiv Detail & Related papers (2024-10-05T18:08:39Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - An Ethical Framework for Guiding the Development of Affectively-Aware
Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Explainable Goal-Driven Agents and Robots -- A Comprehensive Review [13.94373363822037]
The paper reviews approaches on explainable goal-driven intelligent agents and robots.
It focuses on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning.
It suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
arXiv Detail & Related papers (2020-04-21T01:41:20Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Trustworthy AI in the Age of Pervasive Computing and Big Data [22.92621391190282]
We formalise the requirements of trustworthy AI systems through an ethics perspective.
After discussing the state of research and the remaining challenges, we show how a concrete use-case in smart cities can benefit from these methods.
arXiv Detail & Related papers (2020-01-30T08:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.