On Safety Assessment of Artificial Intelligence
- URL: http://arxiv.org/abs/2003.00260v1
- Date: Sat, 29 Feb 2020 14:05:28 GMT
- Title: On Safety Assessment of Artificial Intelligence
- Authors: Jens Braband and Hendrik Schäbe
- Abstract summary: We show that many models of artificial intelligence, in particular machine learning, are statistical models.
Part of the budget of dangerous random failures for the relevant safety integrity level needs to be used for the probabilistic faulty behavior of the AI system.
We propose a research challenge that may be decisive for the use of AI in safety related systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we discuss how systems with Artificial Intelligence (AI) can
undergo safety assessment. This is relevant if AI is used in safety-related
applications. Taking a deeper look into AI models, we show that many models of
artificial intelligence, in particular machine learning, are statistical
models. Safety assessment would then have to concentrate on the model that is
used in the AI, in addition to the normal assessment procedure. Part of the budget of
dangerous random failures for the relevant safety integrity level needs to be
used for the probabilistic faulty behavior of the AI system. We demonstrate our
thoughts with a simple example and propose a research challenge that may be
decisive for the use of AI in safety-related systems.
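The abstract's central quantitative idea is that the AI component may only consume part of the dangerous-failure budget implied by the target safety integrity level (SIL). The paper works through its own simple example; purely as an illustrative sketch (not the authors' example), the snippet below assumes IEC 61508-style high-demand targets, an arbitrary 10% budget share for the AI model, and a hypothetical demand rate, to show how such an allocation translates into a tolerable per-demand error probability.

```python
# Illustrative sketch only: allocating part of a SIL dangerous-failure budget
# to the probabilistic faulty behavior of an AI component. The SIL targets
# follow IEC 61508 high-demand/continuous mode; the 10% share and the demand
# rate are assumptions for illustration, not values taken from the paper.

# Upper bounds on the average frequency of dangerous failures [1/h]
SIL_TARGET_PER_HOUR = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

def ai_error_budget(sil: int, ai_share: float, demands_per_hour: float) -> float:
    """Maximum tolerable probability of a dangerous AI error per demand,
    given the share of the SIL budget assigned to the AI part."""
    total_budget = SIL_TARGET_PER_HOUR[sil]   # dangerous failures per hour
    ai_budget = ai_share * total_budget       # portion reserved for the AI model
    return ai_budget / demands_per_hour       # per-demand error probability

if __name__ == "__main__":
    # Example: SIL 3 function, 10% of the budget for the AI classifier,
    # one demand (e.g. one classification) per second.
    p = ai_error_budget(sil=3, ai_share=0.10, demands_per_hour=3600)
    print(f"Tolerable dangerous error probability per classification: {p:.1e}")
```

Under these assumptions the tolerable error probability per classification comes out around 3e-12, which illustrates how small the permitted probabilistic faulty behavior becomes and why demonstrating it statistically is posed as a research challenge.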
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems [0.0]
AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
arXiv Detail & Related papers (2024-12-18T16:38:16Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - AI Safety Subproblems for Software Engineering Researchers [20.606264558332498]
We briefly summarize long-term AI Safety, and the challenge of avoiding harms from AI as systems meet or exceed human capabilities.
We make conjectures about how software might change with rising capabilities, and categorize "subproblems" which fit into traditional SE areas.
arXiv Detail & Related papers (2023-04-28T02:37:40Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z) - Safe AI -- How is this Possible? [0.45687771576879593]
Traditional safety engineering is coming to a turning point, moving from deterministic, non-evolving systems operating in well-defined contexts to increasingly autonomous and learning-enabled AI systems acting in largely unpredictable operating contexts.
We outline some of the underlying challenges of safe AI and suggest a rigorous engineering framework for minimizing uncertainty, thereby increasing confidence, up to tolerable levels, in the safe behavior of AI systems.
arXiv Detail & Related papers (2022-01-25T16:32:35Z) - Understanding and Avoiding AI Failures: A Practical Guide [0.6526824510982799]
We create a framework for understanding the risks associated with AI applications.
We also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI.
arXiv Detail & Related papers (2021-04-22T17:05:27Z)