A Conceptual Framework for AI-based Decision Systems in Critical Infrastructures
- URL: http://arxiv.org/abs/2504.16133v1
- Date: Mon, 21 Apr 2025 18:38:26 GMT
- Title: A Conceptual Framework for AI-based Decision Systems in Critical Infrastructures
- Authors: Milad Leyli-abadi, Ricardo J. Bessa, Jan Viebahn, Daniel Boos, Clark Borst, Alberto Castagna, Ricardo Chavarriaga, Mohamed Hassouna, Bruno Lemetayer, Giulia Leto, Antoine Marot, Maroua Meddeb, Manuel Meyer, Viola Schiaffonati, Manuel Schneider, Toni Waefler,
- Abstract summary: This paper proposes a holistic conceptual framework for critical infrastructures by adopting an interdisciplinary approach. It integrates traditionally distinct fields such as mathematics, decision theory, computer science, philosophy, psychology, and cognitive engineering. It draws on specialized engineering domains, particularly energy, mobility, and aeronautics.
- Score: 2.272797989529731
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The interaction between humans and AI in safety-critical systems presents a unique set of challenges that remain only partially addressed by existing frameworks. These challenges stem from the complex interplay of requirements for transparency, trust, and explainability, coupled with the necessity for robust and safe decision-making. A framework that holistically integrates human and AI capabilities while addressing these concerns is therefore needed to bridge critical gaps in designing, deploying, and maintaining safe and effective systems. This paper proposes such a holistic conceptual framework for critical infrastructures by adopting an interdisciplinary approach. It integrates traditionally distinct fields such as mathematics, decision theory, computer science, philosophy, psychology, and cognitive engineering, and draws on specialized engineering domains, particularly energy, mobility, and aeronautics. The flexibility of its adoption is also demonstrated through its instantiation on an existing framework.
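As one illustration of how the human-AI interplay described in the abstract might be operationalized, the sketch below shows a minimal human-in-the-loop decision gate in Python: an AI recommendation is executed autonomously only when it passes an explicit safety check and a confidence floor, and is otherwise escalated to the human operator together with its rationale. The structure, names, and threshold are illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str        # proposed control action, e.g. "reroute_line_12" (hypothetical)
    confidence: float  # model's self-reported confidence in [0, 1]
    rationale: str     # explanation shown to the operator

def decide(
    recommend: Callable[[dict], Recommendation],
    is_safe: Callable[[dict, str], bool],
    ask_operator: Callable[[Recommendation], str],
    state: dict,
    confidence_floor: float = 0.9,
) -> str:
    """Return the control action to execute for the current system state.

    The AI acts autonomously only when its recommendation passes the safety
    check and exceeds the confidence floor; otherwise the decision is
    deferred to the human operator, who retains final authority.
    """
    rec = recommend(state)
    if is_safe(state, rec.action) and rec.confidence >= confidence_floor:
        return rec.action
    return ask_operator(rec)
```

A concrete instantiation would replace the placeholder callables with domain-specific models, safety rules, and operator interfaces for the energy, mobility, or aeronautics settings discussed in the paper.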
Related papers
- SYMBIOSIS: Systems Thinking and Machine Intelligence for Better Outcomes in Society [0.0]
SYMBIOSIS is an AI-powered framework and platform designed to make Systems Thinking accessible for addressing societal challenges.
We developed a generative co-pilot that translates complex systems representations into natural language.
SYMBIOSIS aims to serve as a foundational step to unlock future research into responsible and society-centered AI.
arXiv Detail & Related papers (2025-03-07T17:07:26Z) - Towards Developing Ethical Reasoners: Integrating Probabilistic Reasoning and Decision-Making for Complex AI Systems [4.854297874710511]
A computational ethics framework is essential for AI and autonomous systems operating in complex, real-world environments. Existing approaches often lack the adaptability needed to integrate ethical principles into dynamic and ambiguous contexts. We outline the necessary ingredients for building a holistic, meta-level framework that combines intermediate representations, probabilistic reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-02-28T17:25:11Z) - Ten Challenging Problems in Federated Foundation Models [55.343738234307544]
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses the general competences of foundation models with the privacy-preserving capabilities of federated learning.
This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
arXiv Detail & Related papers (2025-02-14T04:01:15Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - TOAST Framework: A Multidimensional Approach to Ethical and Sustainable AI Integration in Organizations [0.38073142980732994]
This paper introduces the Trustworthy, Optimized, Adaptable, and Socio-Technologically harmonious (TOAST) framework. It focuses on reliability, accountability, technical advancement, adaptability, and socio-technical harmony. By grounding the TOAST framework in healthcare case studies, this paper provides a robust evaluation of its practicality and theoretical soundness.
arXiv Detail & Related papers (2025-01-07T05:13:39Z) - SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's core components, describe the main technical challenges, and suggest potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Privacy in Foundation Models: A Conceptual Framework for System Design [3.438211531047665]
Foundation models present both significant challenges and incredible opportunities. There is currently a lack of consensus regarding the comprehensive scope of both technical and non-technical issues that the privacy evaluation process should encompass. This paper introduces a novel conceptual framework that integrates various responsible AI patterns from multiple perspectives.
arXiv Detail & Related papers (2023-11-13T00:44:06Z) - Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and with it the challenge of assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common approach in system architecture that seeks to instantiate an architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
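To make the idea of an interface description template more concrete, the following is a minimal, hypothetical sketch of such a template as a Python dataclass. The field names and example values are illustrative assumptions and do not reproduce the template proposed in the cited paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIComponentInterface:
    """Illustrative (hypothetical) interface description for an AI-enabled component."""
    name: str
    task: str                 # e.g. "short-term load forecasting"
    input_schema: dict        # expected inputs, types, and units
    output_schema: dict       # produced outputs and their semantics
    training_data: str        # provenance of the training data
    operating_conditions: List[str] = field(default_factory=list)  # validated envelope
    known_limitations: List[str] = field(default_factory=list)

# Example: describing a forecasting component before reusing it in a
# different grid-control system than the one it was built for.
forecaster = AIComponentInterface(
    name="load_forecaster_v2",
    task="short-term electrical load forecasting",
    input_schema={"load_history_mw": "float[168]", "temperature_c": "float[168]"},
    output_schema={"load_forecast_mw": "float[24]"},
    training_data="2018-2023 SCADA records from a single operator (illustrative)",
    operating_conditions=["hourly resolution", "temperate climate"],
    known_limitations=["untested on islanded grids"],
)
```

Capturing this information alongside the component is what makes an assessment of portability possible before the component is dropped into a new system.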