AI and the Iterable Epistopics of Risk
- URL: http://arxiv.org/abs/2407.10236v2
- Date: Tue, 16 Jul 2024 08:43:18 GMT
- Title: AI and the Iterable Epistopics of Risk
- Authors: Andy Crabtree, Glenn McGarry, Lachlan Urquhart
- Abstract summary: The risks AI presents to society are broadly understood to be manageable through general calculus.
This paper elaborates how risk is apprehended and managed by a regulator, developer and cyber-security expert.
- Score: 1.26404863283601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The risks AI presents to society are broadly understood to be manageable through general calculus, i.e., general frameworks designed to enable those involved in the development of AI to apprehend and manage risk, such as AI impact assessments, ethical frameworks, emerging international standards, and regulations. This paper elaborates how risk is apprehended and managed by a regulator, developer and cyber-security expert. It reveals that risk and risk management are dependent on mundane situated practices not encapsulated in general calculus. Situated practice surfaces iterable epistopics, revealing how those involved in the development of AI know and subsequently respond to risk and uncover major challenges in their work. The ongoing discovery and elaboration of epistopics of risk in AI a) furnishes a potential program of interdisciplinary inquiry, b) provides AI developers with a means of apprehending risk, and c) informs the ongoing evolution of general calculus.
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
These are: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- QB4AIRA: A Question Bank for AI Risk Assessment [19.783485414942284]
QB4AIRA comprises 293 prioritized questions covering a wide range of AI risk areas.
It serves as a valuable resource for stakeholders in assessing and managing AI risks.
arXiv Detail & Related papers (2023-05-16T09:18:44Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance.
Risks can be quantified using metrics from the technical community (a hypothetical sketch follows this summary).
This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
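As a loose illustration of the kind of quantification this paper discusses (not the authors' method; the hazard names, estimates, and the likelihood-times-severity metric below are all assumptions chosen for the sketch), a minimal example might look like this:
```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """One identified AI hazard (hypothetical schema, not from the paper)."""
    name: str
    likelihood: float  # estimated probability of occurrence, in [0, 1]
    severity: float    # estimated impact on a normalized [0, 1] scale

def risk_score(h: Hazard) -> float:
    """Classic likelihood x severity product, one common quantitative risk metric."""
    return h.likelihood * h.severity

# Hypothetical hazards with purely illustrative estimates.
hazards = [
    Hazard("training-data leakage", likelihood=0.3, severity=0.7),
    Hazard("unsafe automated model update", likelihood=0.1, severity=0.9),
    Hazard("biased downstream decisions", likelihood=0.4, severity=0.6),
]

# Rank hazards so lifecycle governance can prioritize treatment.
for h in sorted(hazards, key=risk_score, reverse=True):
    print(f"{h.name}: {risk_score(h):.2f}")
```
The product metric is only one possible choice; the point the summary makes is that metrics of this kind exist in the technical community and can feed lifecycle governance.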
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
- Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks [12.927021288925099]
Artificial intelligence (AI) systems can present risks of events with very high or catastrophic consequences at societal scale.
NIST is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management.
We provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences.
arXiv Detail & Related papers (2022-06-17T18:40:41Z)