Quantitative AI Risk Assessments: Opportunities and Challenges
- URL: http://arxiv.org/abs/2209.06317v1
- Date: Tue, 13 Sep 2022 21:47:25 GMT
- Title: Quantitative AI Risk Assessments: Opportunities and Challenges
- Authors: David Piorkowski, Michael Hind, John Richards
- Abstract summary: AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
However, significant attendant risks have been identified, leading to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
- Score: 9.262092738841979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although AI-based systems are increasingly being leveraged to provide value
to organizations, individuals, and society, significant attendant risks have
been identified. These risks have led to proposed regulations, litigation, and
general societal concerns.
As with any promising technology, organizations want to benefit from the
positive capabilities of AI technology while reducing the risks. The best way
to reduce risks is to implement comprehensive AI lifecycle governance where
policies and procedures are described and enforced during the design,
development, deployment, and monitoring of an AI system. While support for
comprehensive governance is beginning to emerge, organizations often need to
identify the risks of deploying an already-built model without knowledge of how
it was constructed or access to its original developers.
In such cases, an assessment of the model itself is needed. Such an assessment
would quantitatively evaluate the risks of an existing model
in a manner analogous to how a home inspector might assess the energy
efficiency of an already-built home or a physician might assess overall patient
health based on a battery of tests. This paper explores the concept of a
quantitative AI Risk Assessment, examining the opportunities, challenges, and
potential impacts of such an approach, and discussing how it might improve AI
regulations.
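To make the "battery of tests" idea concrete, the sketch below is an illustration under assumed details, not the paper's method: the function name assess_model_risks, the metric choices, and the noise scale are all hypothetical. It treats an already-built model as a black box, needing only its predictions.

```python
# Hypothetical sketch: a "battery of tests" over an already-built model.
# The model is treated as a black box; only its predictions are needed.
from typing import Callable, Dict

import numpy as np

def assess_model_risks(predict: Callable[[np.ndarray], np.ndarray],
                       X: np.ndarray, y: np.ndarray,
                       group: np.ndarray) -> Dict[str, float]:
    """Compute a few illustrative risk metrics for a binary classifier."""
    preds = predict(X)

    # Performance: accuracy on a held-out evaluation set.
    accuracy = float(np.mean(preds == y))

    # Fairness: gap in positive-prediction rates between two groups
    # (demographic parity gap, one of many possible fairness metrics).
    rate_a = np.mean(preds[group == 0] == 1)
    rate_b = np.mean(preds[group == 1] == 1)
    parity_gap = float(abs(rate_a - rate_b))

    # Robustness: fraction of predictions that flip under small input noise.
    noisy_preds = predict(X + np.random.normal(0.0, 0.01, X.shape))
    flip_rate = float(np.mean(noisy_preds != preds))

    return {"accuracy": accuracy,
            "demographic_parity_gap": parity_gap,
            "noise_flip_rate": flip_rate}
```

As with a home inspection, each score summarizes one dimension of risk; a real assessment would draw on many more metrics and context-specific thresholds.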
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) [0.0]
Authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases.
They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing (a hedged scoring sketch appears after this list).
arXiv Detail & Related papers (2025-01-20T19:38:21Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (see the second sketch after this list).
We hope to provide actionable guidance for building AI systems that meet emerging standards for AI trustworthiness.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about the risks of the technology and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- AI and the Iterable Epistopics of Risk [1.26404863283601]
The risks AI presents to society are broadly understood to be manageable through a general calculus of risk.
This paper elaborates how risk is apprehended and managed by a regulator, developer and cyber-security expert.
arXiv Detail & Related papers (2024-04-29T13:33:22Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
- A Brief Overview of AI Governance for Responsible Machine Learning Systems [3.222802562733787]
This position paper seeks to present a brief introduction to AI governance, which is a framework designed to oversee the responsible use of AI.
Due to the probabilistic nature of AI, the risks associated with it are far greater than those of traditional technologies.
arXiv Detail & Related papers (2022-11-21T23:48:51Z)
- Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks [12.927021288925099]
Artificial intelligence (AI) systems can present risks of events with very high or catastrophic consequences at societal scale.
NIST is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management.
We provide detailed and actionable recommendations focused on identifying and managing risks of events with very high or catastrophic consequences.
arXiv Detail & Related papers (2022-06-17T18:40:41Z)
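The scoring sketch referenced in the human-services entry above: a minimal, hypothetical dimensional risk assessment in which each use case is scored along several risk dimensions rather than given a single pass/fail label. The UseCase fields, the 0-3 scales, and the weights are illustrative assumptions, not the authors' proposal.

```python
# Hypothetical sketch of a dimensional risk assessment; the dimensions,
# 0-3 scales, and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    data_sensitivity: int   # 0 (public data) .. 3 (highly sensitive)
    oversight_level: int    # 0 (none required) .. 3 (licensed professional)
    client_impact: int      # 0 (negligible) .. 3 (wellbeing-critical)

def risk_score(u: UseCase) -> float:
    """Weighted sum over risk dimensions; higher means riskier."""
    return (0.40 * u.data_sensitivity
            + 0.25 * u.oversight_level
            + 0.35 * u.client_impact)

# Example: an AI tool that summarizes sensitive client case notes.
print(risk_score(UseCase(data_sensitivity=3, oversight_level=2, client_impact=3)))
```

In practice, the dimensions and weights would be set by domain experts and revisited per deployment context.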
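The second sketch, referenced in the Engineering Trustworthy AI entry: a bare empirical risk minimization loop, shown only to make the "design choices" point concrete. The hypothesis class (linear models), the loss (logistic), and the regularizer (L2) are exactly the components where trustworthiness requirements could enter; the function name and hyperparameters are assumed, not taken from the paper.

```python
# Hypothetical sketch: a bare empirical risk minimization loop. The
# commented components are where trustworthiness design choices enter.
import numpy as np

def erm_logistic(X: np.ndarray, y: np.ndarray, lam: float = 0.1,
                 lr: float = 0.1, steps: int = 500) -> np.ndarray:
    """Gradient descent on average logistic loss plus an L2 penalty."""
    w = np.zeros(X.shape[1])               # hypothesis class: linear models
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # loss choice: average log-loss
        grad += lam * w                    # regularizer: stability/robustness lever
        w -= lr * grad
    return w
```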
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.