A Framework for Understanding AI-Induced Field Change: How AI
Technologies are Legitimized and Institutionalized
- URL: http://arxiv.org/abs/2108.07804v1
- Date: Wed, 18 Aug 2021 14:06:08 GMT
- Title: A Framework for Understanding AI-Induced Field Change: How AI
Technologies are Legitimized and Institutionalized
- Authors: Benjamin Cedric Larsen
- Abstract summary: This paper presents a conceptual framework to analyze and understand AI-induced field change.
The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions.
The institutional infrastructure surrounding AI-induced fields is generally underdeveloped, which could be an obstacle to the broader institutionalization of AI systems going forward.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) systems operate in increasingly diverse areas,
from healthcare to facial recognition, the stock market, autonomous vehicles,
and so on. While the underlying digital infrastructure of AI systems is
developing rapidly, each area of implementation is subject to different degrees
and processes of legitimization. By combining elements from institutional
theory and information systems theory, this paper presents a conceptual
framework to analyze and understand AI-induced field change. The introduction
of novel AI agents into new or existing fields creates a dynamic in which
algorithms (re)shape organizations and institutions while existing
institutional infrastructures determine the scope and speed at which
organizational change is allowed to occur. Where institutional infrastructure
and governance arrangements, such as standards, rules, and regulations, are
still unelaborated, the field can move fast but is also more likely to be
contested. The institutional infrastructure surrounding AI-induced fields is
generally underdeveloped, which could be an obstacle to the broader
institutionalization of AI systems going forward.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Navigating the sociotechnical labyrinth: Dynamic certification for responsible embodied AI [19.959138971887395]
We argue that sociotechnical requirements shape the governance of artificially intelligent (AI) systems.
Our proposed transdisciplinary approach is designed to ensure the safe, ethical, and practical deployment of AI systems.
arXiv Detail & Related papers (2024-08-16T08:35:26Z)
- Artificial intelligence in government: Concepts, standards, and a unified framework [0.0]
Recent advances in artificial intelligence (AI) hold the promise of transforming government.
It is critical that new AI systems behave in alignment with the normative expectations of society.
arXiv Detail & Related papers (2022-10-31T10:57:20Z)
- Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z)
- Introduction to the Artificial Intelligence that can be applied to the Network Automation Journey [68.8204255655161]
The "Intent-Based Networking - Concepts and Definitions" document describes the different parts of the ecosystem that could be involved in NetDevOps.
The recognize, generate-intent, translate, and refine features require new approaches to implementing algorithms.
arXiv Detail & Related papers (2022-04-02T08:12:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- AI Governance for Businesses [2.072259480917207]
AI governance aims at leveraging AI through the effective use of data and the minimization of AI-related cost and risk.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
arXiv Detail & Related papers (2020-11-20T22:31:37Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common approach in system architecture that seeks to instantiate an architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)