System Safety and Artificial Intelligence
- URL: http://arxiv.org/abs/2202.09292v1
- Date: Fri, 18 Feb 2022 16:37:54 GMT
- Title: System Safety and Artificial Intelligence
- Authors: Roel I.J. Dobbe
- Abstract summary: New applications of AI across societal domains come with new hazards.
The field of system safety has dealt with accidents and harm in safety-critical systems.
This chapter honors system safety pioneer Nancy Leveson.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This chapter formulates seven lessons for preventing harm in artificial
intelligence (AI) systems based on insights from the field of system safety for
software-based automation in safety-critical domains. New applications of AI
across societal domains and public organizations and infrastructures come with
new hazards, which lead to new forms of harm, both grave and pernicious. The
text addresses the lack of consensus for diagnosing and eliminating new AI
system hazards. For decades, the field of system safety has dealt with
accidents and harm in safety-critical systems governed by varying degrees of
software-based automation and decision-making. This field embraces the core
assumption of systems and control that AI systems cannot be safeguarded by
technical design choices on the model or algorithm alone, instead requiring an
end-to-end hazard analysis and design frame that includes the context of use,
impacted stakeholders and the formal and informal institutional environment in
which the system operates. Safety and other values are then inherently
socio-technical and emergent system properties that require design and control
measures to instantiate these across the technical, social and institutional
components of a system. This chapter honors system safety pioneer Nancy
Leveson, by situating her core lessons for today's AI system safety challenges.
For every lesson, concrete tools are offered for rethinking and reorganizing
the safety management of AI systems, both in design and governance. This
history tells us that effective AI safety management requires transdisciplinary
approaches and a shared language that allows involvement of all levels of
society.
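To make the end-to-end framing above concrete, here is a minimal sketch of how a hazard analysis might represent safety as an emergent property spanning technical, social, and institutional components. Every class, field, and example in it is an illustrative assumption, not a construct from the chapter.

```python
# Minimal, illustrative sketch (not from the chapter): safety as an
# emergent property spanning technical, social, and institutional
# components. All names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Layer(Enum):
    TECHNICAL = "technical"          # model, data pipeline, sensors
    SOCIAL = "social"                # operators, impacted stakeholders
    INSTITUTIONAL = "institutional"  # oversight bodies, regulation


@dataclass
class Component:
    name: str
    layer: Layer
    controls: list[str] = field(default_factory=list)  # control actions issued
    feedback: list[str] = field(default_factory=list)  # feedback channels received


@dataclass
class Hazard:
    description: str
    involved: list[Component]

    def is_purely_technical(self) -> bool:
        return all(c.layer is Layer.TECHNICAL for c in self.involved)


# A hazard that no model-level fix alone can eliminate, because it spans
# the operator and the oversight regime as well as the algorithm.
model = Component("triage model", Layer.TECHNICAL, controls=["risk score"])
operator = Component("caseworker", Layer.SOCIAL, feedback=["appeal outcomes"])
regulator = Component("audit body", Layer.INSTITUTIONAL, feedback=["incident reports"])

hazard = Hazard(
    "risk scores drift silently while appeals go unreviewed",
    involved=[model, operator, regulator],
)
assert not hazard.is_purely_technical()  # design measures must span all layers
```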
Related papers
- From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z)
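As a rough illustration of the STPA step this entry refers to, the sketch below enumerates the four standard ways a control action can be unsafe; the ML-flavored control actions are invented examples, not the paper's case studies.

```python
# Illustrative sketch of one STPA step: enumerating unsafe control action
# (UCA) candidates for each control action. The examples are invented.
from itertools import product

UCA_TYPES = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early / too late",
    "applied too long / stopped too soon",
]

control_actions = ["deploy updated model", "flag input for human review"]

# Candidate UCAs are (control action, UCA type) pairs; an analyst then
# judges which pairs are hazardous in context and writes safety constraints.
for action, uca in product(control_actions, UCA_TYPES):
    print(f"UCA candidate: '{action}' {uca}")
```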
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
These guarantees are obtained through the interplay of three core components: a world model, a safety specification, and a verifier.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
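The interplay of the three GS AI components named above can be read as a simple interface; the sketch below is a speculative rendering of it, with all types and signatures invented for illustration.

```python
# Speculative sketch of the guaranteed-safe (GS) AI interplay: a verifier
# checks a policy against a safety specification relative to a world
# model. All types and logic here are invented for illustration.
from dataclasses import dataclass
from typing import Callable

State = dict[str, float]


@dataclass
class WorldModel:
    # Maps (state, action) to a predicted next state.
    step: Callable[[State, str], State]


@dataclass
class SafetySpec:
    # Predicate over states that must always hold.
    holds: Callable[[State], bool]


def verify(policy: Callable[[State], str], model: WorldModel,
           spec: SafetySpec, init: State, horizon: int) -> bool:
    """Bounded check: does the policy keep the spec satisfied under the
    world model for `horizon` steps? A real verifier would return an
    auditable proof certificate rather than a bool."""
    state = init
    for _ in range(horizon):
        if not spec.holds(state):
            return False
        state = model.step(state, policy(state))
    return spec.holds(state)


# Toy usage: a thermostat-like policy must keep temperature below 100.
wm = WorldModel(step=lambda s, a: {"temp": s["temp"] + (1.0 if a == "heat" else -1.0)})
spec = SafetySpec(holds=lambda s: s["temp"] < 100.0)


def policy(s: State) -> str:
    return "cool" if s["temp"] > 50.0 else "heat"


assert verify(policy, wm, spec, init={"temp": 20.0}, horizon=200)
```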
- A Cybersecurity Risk Analysis Framework for Systems with Artificial Intelligence Components [0.0]
The introduction of the European Union Artificial Intelligence Act, the NIST Artificial Intelligence Risk Management Framework, and related norms demands a better understanding and implementation of novel risk analysis approaches to evaluate systems with Artificial Intelligence components.
This paper provides a cybersecurity risk analysis framework that can help assess such systems.
arXiv Detail & Related papers (2024-01-03T09:06:39Z)
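The paper's framework is not reproduced in this summary; as a generic placeholder, the sketch below shows the familiar likelihood-times-impact scoring that such frameworks typically extend with AI-specific threat categories. Every threat name and threshold in it is an assumption.

```python
# Generic risk-matrix sketch, NOT the paper's framework: score threats to
# an AI component by likelihood x impact. Threat names are invented.
RISK_THRESHOLDS = {"low": 6, "medium": 12}  # score boundaries (assumed)


def risk_level(likelihood: int, impact: int) -> str:
    """likelihood and impact each on a 1-5 scale."""
    score = likelihood * impact
    if score <= RISK_THRESHOLDS["low"]:
        return "low"
    if score <= RISK_THRESHOLDS["medium"]:
        return "medium"
    return "high"


threats = {
    "training-data poisoning": (3, 5),
    "model extraction via API": (4, 3),
    "adversarial evasion at inference": (4, 4),
}
for name, (lik, imp) in threats.items():
    print(f"{name}: {risk_level(lik, imp)}")
```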
- Security Challenges in Autonomous Systems Design [1.864621482724548]
As autonomous systems gain independence from human control, their cybersecurity becomes even more critical.
This paper thoroughly discusses the state of the art, identifies emerging security challenges and proposes research directions.
arXiv Detail & Related papers (2023-11-05T09:17:39Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Deep Learning Safety Concerns in Automated Driving Perception [43.026485214492105]
Recent advances in the field of deep learning and the impressive performance of deep neural networks (DNNs) for perception have resulted in increased demand for their use in automated driving (AD) systems.
This paper introduces an additional categorization of safety concerns to aid understanding and to enable cross-functional teams to address them jointly.
arXiv Detail & Related papers (2023-09-07T15:25:47Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
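A plausible shape for the traceability described above is a link graph from system artifacts to safety-analysis elements; the sketch below, with invented artifact and claim names, shows how the safety impact of a change could be found by following trace links.

```python
# Illustrative sketch (all names invented) of traceability-based impact
# analysis: follow trace links from a changed artifact to the safety
# assurance case (SAC) claims that may need re-review.
from collections import deque

# artifact -> artifacts/safety elements it traces to
TRACE_LINKS = {
    "perception_module.py": ["hazard:obstacle-missed"],
    "hazard:obstacle-missed": ["sac:claim-2.1"],
    "planner_config.yaml": ["hazard:late-braking"],
    "hazard:late-braking": ["sac:claim-2.1", "sac:claim-3.4"],
}


def impacted_safety_claims(changed: str) -> set[str]:
    """Breadth-first traversal from a changed artifact to SAC claims."""
    seen, queue, claims = {changed}, deque([changed]), set()
    while queue:
        node = queue.popleft()
        for nxt in TRACE_LINKS.get(node, []):
            if nxt.startswith("sac:"):
                claims.add(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return claims


print(sorted(impacted_safety_claims("planner_config.yaml")))
# ['sac:claim-2.1', 'sac:claim-3.4']
```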
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
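The template itself is not given in this summary; the sketch below is one guess at the kind of fields an interface description for an AI-enabled component might capture, with every field name an assumption.

```python
# Hypothetical sketch of an interface description for an AI-enabled
# component, loosely in the spirit of the template the paper proposes.
# Field names are assumptions, not the paper's actual template.
from dataclasses import dataclass, field


@dataclass
class InterfaceDescription:
    component: str
    task: str                          # what the component was trained to do
    input_schema: dict[str, str]       # name -> type/unit of each input
    output_schema: dict[str, str]      # name -> type/unit of each output
    training_context: str              # domain/conditions it was built for
    known_limits: list[str] = field(default_factory=list)


desc = InterfaceDescription(
    component="pedestrian-detector-v2",
    task="detect pedestrians in urban daytime imagery",
    input_schema={"image": "RGB uint8, 1024x768"},
    output_schema={"boxes": "list of (x, y, w, h, confidence)"},
    training_context="European urban streets, daylight only",
    known_limits=["unvalidated at night", "no thermal sensor support"],
)
# A reuse decision can compare `training_context` and `known_limits`
# against the target system before porting the component.
```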
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)