Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems
- URL: http://arxiv.org/abs/2412.14020v1
- Date: Wed, 18 Dec 2024 16:38:16 GMT
- Title: Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems
- Authors: Ronald Schnitzer, Lennart Kilian, Simon Roessner, Konstantinos Theodorou, Sonja Zillner
- Abstract summary: AI has emerged as a key technology, driving advancements across a range of applications.
The challenge of assuring safety in systems that incorporate AI components is substantial.
We propose a novel methodology designed to support the creation of safety assurance cases for AI-based systems.
- Abstract: Artificial Intelligence (AI) has emerged as a key technology, driving advancements across a range of applications. Its integration into modern autonomous systems requires assuring safety. However, the challenge of assuring safety in systems that incorporate AI components is substantial. The lack of concrete specifications, together with the complexity of both the operational environment and the system itself, leads to various forms of uncertain behavior and complicates the derivation of convincing evidence of system safety. Nonetheless, scholars have proposed thoroughly analyzing and mitigating AI-specific insufficiencies, so-called AI safety concerns, which yields essential evidence for a convincing assurance case. In this paper, we build upon this idea and propose the so-called Landscape of AI Safety Concerns, a novel methodology designed to support the creation of safety assurance cases for AI-based systems by systematically demonstrating the absence of AI safety concerns. The methodology's application is illustrated through a case study involving a driverless regional train, demonstrating its practicality and effectiveness.
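The abstract describes the core idea only at a high level: an assurance case is strengthened by systematically showing that each AI safety concern has been addressed. The following Python snippet is a minimal, hypothetical sketch of that bookkeeping, not the authors' implementation; the `SafetyConcern` class, the `assurance_gaps` function, and the example concern names are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyConcern:
    """One AI-specific insufficiency (e.g., distributional shift)."""
    name: str
    evidence: list[str] = field(default_factory=list)  # supporting artifacts

    def is_addressed(self) -> bool:
        # A concern counts as addressed only if at least one piece
        # of mitigation evidence has been attached to it.
        return len(self.evidence) > 0

def assurance_gaps(landscape: list[SafetyConcern]) -> list[str]:
    """Return the names of concerns that still lack supporting evidence."""
    return [c.name for c in landscape if not c.is_addressed()]

# Hypothetical concerns for a perception component of a driverless train.
landscape = [
    SafetyConcern("distributional shift", ["ODD coverage analysis"]),
    SafetyConcern("brittleness to perturbations"),
    SafetyConcern("insufficient generalization", ["test-set performance report"]),
]

print(assurance_gaps(landscape))  # -> ['brittleness to perturbations']
```

Under this reading, any remaining gap would block the assurance case until corresponding evidence is produced for it.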
Related papers
- AI Safety for Everyone [3.440579243843689]
Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems.
This framing may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles.
We find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems.
arXiv Detail & Related papers (2025-02-13T13:04:59Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations [15.946242944119385]
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems.
Our goal is to promote advancement in AI safety research, and ultimately enhance people's trust in digital transformation.
arXiv Detail & Related papers (2024-08-23T09:33:48Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Concrete Problems in AI Safety, Revisited [1.4089652912597792]
As AI systems proliferate in society, the AI community is increasingly preoccupied with the concept of AI Safety.
Through an analysis of real-world AI incidents, we demonstrate that although the current vocabulary captures a range of the issues encountered in AI deployment, an expanded socio-technical framing will be required.
arXiv Detail & Related papers (2023-12-18T23:38:05Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Safe AI -- How is this Possible? [0.45687771576879593]
Traditional safety engineering is at a turning point, moving from deterministic, non-evolving systems operating in well-defined contexts to increasingly autonomous, learning-enabled AI systems acting in largely unpredictable operating contexts.
We outline some of the underlying challenges of safe AI and suggest a rigorous engineering framework for minimizing uncertainty, thereby increasing confidence in the safe behavior of AI systems up to tolerable levels.
arXiv Detail & Related papers (2022-01-25T16:32:35Z)