Putting AI Ethics into Practice: The Hourglass Model of Organizational
AI Governance
- URL: http://arxiv.org/abs/2206.00335v2
- Date: Tue, 31 Jan 2023 10:32:13 GMT
- Title: Putting AI Ethics into Practice: The Hourglass Model of Organizational
AI Governance
- Authors: Matti Mäntymäki, Matti Minkkinen, Teemu Birkstedt, Mika Viljanen
- Abstract summary: We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The organizational use of artificial intelligence (AI) has rapidly spread
across various sectors. Alongside the awareness of the benefits brought by AI,
there is a growing consensus on the necessity of tackling the risks and
potential harms, such as bias and discrimination, brought about by advanced AI
technologies. A multitude of AI ethics principles have been proposed to tackle
these risks, but the outlines of organizational processes and practices for
ensuring socially responsible AI development are in a nascent state. To address
the paucity of comprehensive governance models, we present an AI governance
framework, the hourglass model of organizational AI governance, which targets
organizations that develop and use AI systems. The framework is designed to
help organizations deploying AI systems translate ethical AI principles into
practice and align their AI systems and processes with the forthcoming European
AI Act. The hourglass framework includes governance requirements at the
environmental, organizational, and AI system levels. At the AI system level, we
connect governance requirements to AI system life cycles to ensure governance
throughout the system's life span. The governance model highlights the systemic
nature of AI governance and opens new research avenues into its practical
implementation, the mechanisms that connect different AI governance layers, and
the dynamics between the AI governance actors. The model also offers a starting
point for organizational decision-makers to consider the governance components
needed to ensure social acceptability, mitigate risks, and realize the
potential of AI.
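To make the layered structure described in the abstract concrete, the sketch below models governance requirements at the environmental, organizational, and AI system levels, with system-level requirements tracked per life-cycle stage. The specific requirements and stage names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the hourglass model's layered structure.
# The three level names follow the abstract; the example requirements
# and life-cycle stages are illustrative placeholders.

LIFECYCLE_STAGES = ["design", "development", "deployment", "monitoring"]

governance_layers = {
    "environmental": ["regulation (e.g. the EU AI Act)", "societal expectations"],
    "organizational": ["governance roles", "risk management processes"],
    # System-level requirements are attached to each life-cycle stage,
    # reflecting governance "throughout the system's life span".
    "ai_system": {
        stage: ["impact assessment", "documentation"]
        for stage in LIFECYCLE_STAGES
    },
}

def stages_covered(layers: dict) -> list[str]:
    """Return the life-cycle stages the system-level layer addresses."""
    return list(layers["ai_system"].keys())

print(stages_covered(governance_layers))
```

Representing the system level as a stage-indexed mapping makes it easy to check that no life-cycle stage is left without governance requirements.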
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry [0.0]
We discuss challenges that any organization attempting to operationalize AI governance will have to face.
These include questions concerning how to define the material scope of AI governance.
We hope to provide project managers, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks with general best practices.
arXiv Detail & Related papers (2024-07-07T12:01:42Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering [20.644494592443245]
We present a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR)
Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle.
arXiv Detail & Related papers (2022-09-12T00:09:08Z)
- Governance of Ethical and Trustworthy AI Systems: Research Gaps in the ECCOLA Method [5.28595286827031]
This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems.
The results demonstrate that while ECCOLA fully facilitates AI governance in corporate governance practices in all its processes, some of its practices do not fully foster data governance and information governance practices.
arXiv Detail & Related papers (2021-11-11T13:54:31Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- AI Governance for Businesses [2.072259480917207]
AI governance aims at leveraging AI through the effective use of data and the minimization of AI-related costs and risks.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
arXiv Detail & Related papers (2020-11-20T22:31:37Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.