When Should Algorithms Resign? A Proposal for AI Governance
- URL: http://arxiv.org/abs/2402.18326v2
- Date: Tue, 16 Jul 2024 19:40:37 GMT
- Title: When Should Algorithms Resign? A Proposal for AI Governance
- Authors: Umang Bhatt, Holli Sargeant
- Abstract summary: Algorithmic resignation is a strategic approach for managing the use of artificial intelligence (AI) by embedding governance directly into AI systems.
It involves deliberate and informed disengagement from AI, such as restricting access to AI outputs or displaying performance disclaimers.
- Score: 10.207523025324296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic resignation is a strategic approach for managing the use of artificial intelligence (AI) by embedding governance directly into AI systems. It involves deliberate and informed disengagement from AI, such as restricting access to AI outputs or displaying performance disclaimers, in specific scenarios to aid the appropriate and effective use of AI. By integrating algorithmic resignation as a governance mechanism, organizations can better control when and how AI is used, balancing the benefits of automation with the need for human oversight.
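The abstract's two examples of resignation, restricting access to AI outputs and displaying performance disclaimers, can be sketched as a simple gate in front of a model's prediction. This is a minimal illustration only, not the paper's implementation; the function name `resignation_gate`, the `Verdict` type, and the confidence thresholds are all hypothetical choices made for this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    output: Optional[str]              # None means the system withheld its output
    disclaimer: Optional[str] = None   # optional performance disclaimer

def resignation_gate(prediction: str, confidence: float,
                     low: float = 0.5, high: float = 0.8) -> Verdict:
    """Decide whether to release, caveat, or withhold an AI output.

    Hypothetical thresholds: above `high` the output is released as-is,
    between `low` and `high` it carries a disclaimer, and below `low`
    the system 'resigns' by restricting access to the output entirely.
    """
    if confidence >= high:
        return Verdict(output=prediction)
    if confidence >= low:
        return Verdict(output=prediction,
                       disclaimer="Low-confidence prediction; verify before use.")
    return Verdict(output=None,
                   disclaimer="Model resigned: confidence below deployment threshold.")
```

In a deployment, the thresholds would be set per task from validation data, and the withheld case would typically route the decision to a human reviewer rather than simply failing.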
Related papers
- The Decision Path to Control AI Risks Completely: Fundamental Control Mechanisms for AI Governance [1.1252728925416642]
Three of the AIMs must be built inside AI systems and three in society to address major areas of AI risks.
We discuss how to strengthen analog physical safeguards to prevent smarter AI/AGI/ASI from circumventing core safety controls.
arXiv Detail & Related papers (2025-12-04T05:53:41Z) - Development of management systems using artificial intelligence systems and machine learning methods for boards of directors (preprint, unofficial translation) [0.0]
The study addresses the paradigm shift in corporate management, where AI is moving from a decision support tool to an autonomous decision-maker.
A central problem identified is that the development of AI technologies is far outpacing the creation of adequate legal and ethical guidelines.
The research proposes a "reference model" for the development and implementation of autonomous AI systems in corporate management.
arXiv Detail & Related papers (2025-08-05T04:01:22Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Societal Adaptation to Advanced AI [1.2607853680700076]
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse.
We urge a complementary approach: increasing societal adaptation to advanced AI.
We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - LeanAI: A method for AEC practitioners to effectively plan AI implementations [1.213096549055645]
Despite the enthusiasm regarding the use of AI, 85% of current big data projects fail.
One of the main reasons for AI project failures in the AEC industry is the disconnect between those who plan or decide to use AI and those who implement it.
This work introduces the LeanAI method, which delineates what AI should solve, what it can solve, and what it will solve.
arXiv Detail & Related papers (2023-06-29T09:18:11Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Time for AI (Ethics) Maturity Model Is Now [15.870654219935972]
This paper argues that AI software is still software and needs to be approached from the software development perspective.
We wish to discuss whether the focus should be on AI ethics or, more broadly, the quality of an AI system.
arXiv Detail & Related papers (2021-01-29T17:37:44Z) - AI Governance for Businesses [2.072259480917207]
It aims at leveraging AI through effective use of data and minimization of AI-related cost and risk.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
arXiv Detail & Related papers (2020-11-20T22:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.