Evolving AI Risk Management: A Maturity Model based on the NIST AI Risk
Management Framework
- URL: http://arxiv.org/abs/2401.15229v2
- Date: Tue, 13 Feb 2024 17:41:42 GMT
- Title: Evolving AI Risk Management: A Maturity Model based on the NIST AI Risk
Management Framework
- Authors: Ravit Dotan, Borhane Blili-Hamelin, Ravi Madhavan, Jeanna Matthews,
Joshua Scarpino
- Abstract summary: Researchers, government bodies, and organizations have been calling on the responsible AI community to shift from general principles to tangible, operationalizable practices.
We provide a framework for evaluating where organizations sit relative to the emerging consensus on sociotechnical harm mitigation best practices.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers, government bodies, and organizations have been repeatedly
calling for a shift in the responsible AI community from general principles to
tangible and operationalizable practices in mitigating the potential
sociotechnical harms of AI. Frameworks like the NIST AI RMF embody an emerging
consensus on recommended practices in operationalizing sociotechnical harm
mitigation. However, private sector organizations currently lag far behind this
emerging consensus. Implementation is sporadic and selective at best. At worst,
it is ineffective and can risk serving as a misleading veneer of trustworthy
processes, providing an appearance of legitimacy to substantively harmful
practices. In this paper, we provide a foundation for a framework for
evaluating where organizations sit relative to the emerging consensus on
sociotechnical harm mitigation best practices: a flexible maturity model based
on the NIST AI RMF.
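As a rough illustration of how an organization might be assessed against such a maturity model, the sketch below scores the four core functions of the NIST AI RMF (Govern, Map, Measure, Manage). The level names, scoring scale, field names, and aggregation are placeholder assumptions for illustration only; the paper's actual rubric is not reproduced in the abstract.

```python
from dataclasses import dataclass
from statistics import mean

# The four core functions of the NIST AI RMF. The maturity levels and
# scoring scheme below are illustrative assumptions, not the paper's rubric.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")
MATURITY_LEVELS = {0: "Ad hoc", 1: "Defined", 2: "Managed", 3: "Institutionalized"}

@dataclass
class FunctionAssessment:
    function: str   # one of RMF_FUNCTIONS
    level: int      # key into MATURITY_LEVELS
    evidence: str   # rationale or documentation supporting the rating

def overall_maturity(assessments: list[FunctionAssessment]) -> float:
    """Aggregate per-function ratings into a single organization-level score."""
    return mean(a.level for a in assessments)

if __name__ == "__main__":
    ratings = [
        FunctionAssessment("Govern", 2, "AI policy approved by the board"),
        FunctionAssessment("Map", 1, "Use-case inventory exists but is incomplete"),
        FunctionAssessment("Measure", 1, "Bias metrics tracked for one product line"),
        FunctionAssessment("Manage", 0, "No documented incident response for AI harms"),
    ]
    for r in ratings:
        print(f"{r.function}: {MATURITY_LEVELS[r.level]} - {r.evidence}")
    print(f"Overall maturity score: {overall_maturity(ratings):.2f}")
```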
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety [0.0]
Operationalizing AI ethics and safety principles and frameworks is essential to realizing potential benefits and mitigating potential harms caused by AI systems.
We argue that such taxonomies must also be translated into local categories to be readily implemented in sector-specific AI safety operationalization efforts.
Drawing from emerging anthropological theories of human rights, we propose that the process of "vernacularization" can help bridge this gap.
arXiv Detail & Related papers (2024-10-21T22:47:48Z)
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure [4.578401882034969]
We focus on how model performance evaluation may inform or inhibit probing of model limitations, biases, and other risks.
Our findings can inform AI providers and legal scholars in designing interventions and policies that preserve open-source innovation while incentivizing ethical uptake.
arXiv Detail & Related papers (2024-09-27T19:09:40Z)
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications [0.0]
This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Three lines of defense against risks from AI [0.0]
It is not always clear who is responsible for AI risk management.
The Three Lines of Defense (3LoD) model is considered best practice in many industries.
I suggest ways in which AI companies could implement the model.
arXiv Detail & Related papers (2022-12-16T09:33:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.