DURA-CPS: A Multi-Role Orchestrator for Dependability Assurance in LLM-Enabled Cyber-Physical Systems
- URL: http://arxiv.org/abs/2506.06381v2
- Date: Fri, 13 Jun 2025 03:57:22 GMT
- Title: DURA-CPS: A Multi-Role Orchestrator for Dependability Assurance in LLM-Enabled Cyber-Physical Systems
- Authors: Trisanth Srinivasan, Santosh Patapati, Himani Musku, Idhant Gode, Aditya Arora, Samvit Bhattacharya, Abubakr Nazriev, Sanika Hirave, Zaryab Kanjiani, Srinjoy Ghose
- Abstract summary: Cyber-Physical Systems (CPS) increasingly depend on advanced AI techniques to operate in critical applications. Traditional verification and validation methods often struggle to handle the unpredictable and dynamic nature of AI components. We introduce DURA-CPS, a novel framework that employs multi-role orchestration to automate the iterative assurance process for AI-powered CPS.
- Score: 2.118898809872991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber-Physical Systems (CPS) increasingly depend on advanced AI techniques to operate in critical applications. However, traditional verification and validation methods often struggle to handle the unpredictable and dynamic nature of AI components. In this paper, we introduce DURA-CPS, a novel framework that employs multi-role orchestration to automate the iterative assurance process for AI-powered CPS. By assigning specialized roles (e.g., safety monitoring, security assessment, fault injection, and recovery planning) to dedicated agents within a simulated environment, DURA-CPS continuously evaluates and refines AI behavior against a range of dependability requirements. We demonstrate the framework through a case study involving an autonomous vehicle navigating an intersection with an AI-based planner. Our results show that DURA-CPS effectively detects vulnerabilities, manages performance impacts, and supports adaptive recovery strategies, thereby offering a structured and extensible solution for rigorous V&V in safety- and security-critical systems.
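The assurance loop the abstract describes (specialized roles evaluating a plan, then a recovery step refining it) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual API: the `Orchestrator`, `Finding`, and the example safety/security checks are all assumed names, and the intersection scenario is a toy stand-in for the case study.

```python
# Hedged sketch of a multi-role assurance loop in the spirit of DURA-CPS.
# All names (Orchestrator, Finding, the role checks) are illustrative
# assumptions, not the framework's real interface.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Finding:
    role: str        # which assurance role raised the issue
    issue: str       # human-readable description
    severity: int    # 1 (minor) .. 3 (critical)


class Orchestrator:
    """Runs specialized role checks against a candidate plan and
    iteratively asks the planner to refine it until no findings remain."""

    def __init__(self, roles: Dict[str, Callable[[dict], List[Finding]]]):
        self.roles = roles

    def assure(self, plan: dict,
               refine: Callable[[dict, List[Finding]], dict],
               max_iters: int = 5) -> Tuple[dict, List[Finding]]:
        for _ in range(max_iters):
            findings = [f for check in self.roles.values()
                        for f in check(plan)]
            if not findings:
                return plan, []            # plan passed every role
            plan = refine(plan, findings)  # recovery-planning step
        return plan, findings              # give up, report what remains


# Illustrative roles for a toy intersection-crossing plan.
def safety_monitor(plan: dict) -> List[Finding]:
    if plan["speed"] > 10.0:  # assumed speed limit near the junction
        return [Finding("safety", "speed too high at intersection", 3)]
    return []


def security_assessor(plan: dict) -> List[Finding]:
    if not plan.get("sensor_authenticated", False):
        return [Finding("security", "unauthenticated sensor input", 2)]
    return []


def refine(plan: dict, findings: List[Finding]) -> dict:
    plan = dict(plan)
    for f in findings:
        if f.role == "safety":
            plan["speed"] = min(plan["speed"], 10.0)
        if f.role == "security":
            plan["sensor_authenticated"] = True
    return plan


orch = Orchestrator({"safety": safety_monitor,
                     "security": security_assessor})
final_plan, open_findings = orch.assure(
    {"speed": 15.0, "sensor_authenticated": False}, refine)
print(final_plan, open_findings)
```

The loop converges in two iterations here: both roles flag the initial plan, the recovery step clamps the speed and enables authentication, and the second pass returns no findings.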
Related papers
- Taming Uncertainty via Automation: Observing, Analyzing, and Optimizing Agentic AI Systems [1.9751175705897066]
Large Language Models (LLMs) are increasingly deployed within agentic systems: collections of interacting, LLM-powered agents that execute complex, adaptive workflows using memory, tools, and dynamic planning. Traditional software observability and operations practices fall short in addressing these challenges. This paper introduces AgentOps: a comprehensive framework for observing, analyzing, optimizing, and automating the operation of agentic AI systems.
arXiv Detail & Related papers (2025-07-15T12:54:43Z) - Hierarchical Adversarially-Resilient Multi-Agent Reinforcement Learning for Cyber-Physical Systems Security [0.0]
This paper introduces a novel Hierarchical Adversarially-Resilient Multi-Agent Reinforcement Learning framework. The framework incorporates an adversarial training loop designed to simulate and anticipate evolving cyber threats.
arXiv Detail & Related papers (2025-06-12T01:38:25Z) - Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection [38.083049237330826]
This study explores the use of Large Language Models (LLMs) in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs). Our results indicate that while zero-shot prompting performs poorly, few-shot prompting significantly enhances classification performance. Challenges such as model reliability, interpretability, and adversarial robustness remain critical areas for future research.
arXiv Detail & Related papers (2025-06-11T18:43:51Z) - A Large Language Model-Supported Threat Modeling Framework for Transportation Cyber-Physical Systems [11.872361272221244]
TraCR-TMF (Transportation Cybersecurity and Resiliency Threat Modeling Framework) is a large language model (LLM)-based framework that minimizes expert intervention. It identifies threats, potential attack techniques, and corresponding countermeasures by leveraging the MITRE ATT&CK matrix. TraCR-TMF also maps attack paths to critical assets by analyzing vulnerabilities using a customized LLM.
arXiv Detail & Related papers (2025-06-01T04:33:34Z) - MSDA: Combining Pseudo-labeling and Self-Supervision for Unsupervised Domain Adaptation in ASR [59.83547898874152]
We introduce a sample-efficient, two-stage adaptation approach that integrates self-supervised learning with semi-supervised techniques. MSDA is designed to enhance the robustness and generalization of ASR models. We demonstrate that Meta PL can be applied effectively to ASR tasks, achieving state-of-the-art results.
arXiv Detail & Related papers (2025-05-30T14:46:05Z) - Engineering Risk-Aware, Security-by-Design Frameworks for Assurance of Large-Scale Autonomous AI Models [0.0]
This paper presents an enterprise-level, risk-aware, security-by-design approach for large-scale autonomous AI systems. We detail a unified pipeline that delivers provable guarantees of model behavior under adversarial and operational stress. Case studies in national security, open-source model governance, and industrial automation demonstrate measurable reductions in vulnerability and compliance overhead.
arXiv Detail & Related papers (2025-05-09T20:14:53Z) - Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation [55.02966123945644]
We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms. Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer. Experiments demonstrate the ability of the proposed solution to correct unsafe actions while preserving efficient navigation behavior.
arXiv Detail & Related papers (2025-04-30T13:47:25Z) - AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z) - A Survey of Anomaly Detection in Cyber-Physical Systems [1.2891210250935148]
This paper provides an overview of the different ways researchers have approached anomaly detection in CPS. We categorize and compare methods like machine learning, deep learning, mathematical models, invariant-based, and hybrid techniques. Our goal is to help readers understand the strengths and weaknesses of these methods and how they can be used to create safer, more reliable CPS.
arXiv Detail & Related papers (2025-02-18T19:38:18Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - MORTAR: A Model-based Runtime Action Repair Framework for AI-enabled Cyber-Physical Systems [21.693552236958983]
Cyber-Physical Systems (CPSs) are increasingly prevalent across various industrial and daily-life domains.
With recent advancements in artificial intelligence (AI), learning-based components, especially AI controllers, have become essential in enhancing the functionality and efficiency of CPSs.
The lack of interpretability in these AI controllers presents challenges to the safety and quality assurance of AI-enabled CPSs (AI-CPSs).
arXiv Detail & Related papers (2024-08-07T16:44:53Z) - Cooperative Cognitive Dynamic System in UAV Swarms: Reconfigurable Mechanism and Framework [80.39138462246034]
We propose the cooperative cognitive dynamic system (CCDS) to optimize the management of UAV swarms.
CCDS is a hierarchical and cooperative control structure that enables real-time data processing and decision-making.
In addition, CCDS can be integrated with the biomimetic mechanism to efficiently allocate tasks for UAV swarms.
arXiv Detail & Related papers (2024-05-18T12:45:00Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Constraints Satisfiability Driven Reinforcement Learning for Autonomous Cyber Defense [7.321728608775741]
We present a new hybrid autonomous agent architecture that aims to optimize and verify the defense policies of reinforcement learning (RL).
We use constraints verification (using satisfiability modulo theories (SMT)) to steer RL decision-making toward safe and effective actions.
Our evaluation of the presented approach in a simulated CPS environment shows that the agent learns the optimal policy quickly and defeats diversified attack strategies in 99% of cases.
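The vetting step this abstract describes (checking each policy-preferred action against formal constraints before execution) can be sketched as follows. This is an illustrative sketch only: a real implementation would discharge the safety predicate to an SMT solver such as Z3, whereas here a plain Python predicate stands in for the solver call, and the action names and fallback are invented for the example.

```python
# Hedged sketch of constraints-driven action vetting. The is_safe
# predicate stands in for an SMT satisfiability check; all action
# names and the fallback are illustrative assumptions.
from typing import Callable, List


def pick_safe_action(ranked_actions: List[str],
                     is_safe: Callable[[str], bool],
                     fallback: str = "isolate_host") -> str:
    """Return the highest-ranked action that passes verification,
    falling back to a known-safe default if none do."""
    for action in ranked_actions:   # ordered by RL policy preference
        if is_safe(action):         # stand-in for the SMT check
            return action
    return fallback


# Toy constraint: never take an action that disables monitoring.
def constraint(action: str) -> bool:
    return action != "disable_ids"


chosen = pick_safe_action(["disable_ids", "patch_service", "reboot"],
                          constraint)
print(chosen)  # patch_service: the policy's top choice violated the constraint
```

The policy's ranking is preserved wherever the constraints allow it, so verification steers rather than replaces the learned behavior.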
arXiv Detail & Related papers (2021-04-19T01:08:30Z) - An RL-Based Adaptive Detection Strategy to Secure Cyber-Physical Systems [0.0]
Increased dependence on software-based control has escalated the vulnerabilities of Cyber-Physical Systems.
We propose a Reinforcement Learning (RL) based framework which adaptively sets the parameters of such detectors based on experience learned from attack scenarios.
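Setting detector parameters from experience, as this abstract describes, can be sketched as a simple bandit-style loop. Everything here is an illustrative assumption: the candidate thresholds, the epsilon-greedy update, and the simulated reward (detection benefit minus false-alarm cost, with 0.4 as the built-in optimum) are toy stand-ins for the paper's actual RL formulation.

```python
# Hedged sketch: adapting a detector threshold from attack-episode
# feedback, loosely in the spirit of the RL-based framework above.
# The epsilon-greedy update and simulated reward are assumptions.
import random

random.seed(0)

thresholds = [0.2, 0.4, 0.6, 0.8]        # candidate detector parameters
reward_sum = {t: 0.0 for t in thresholds}
pulls = {t: 0 for t in thresholds}


def episode_reward(threshold: float) -> float:
    """Simulated feedback: detection benefit minus false-alarm cost.
    In this toy model 0.4 is the best trade-off by construction."""
    return 1.0 - abs(threshold - 0.4)


for step in range(200):
    if random.random() < 0.1:             # explore a random setting
        t = random.choice(thresholds)
    else:                                 # exploit the best estimate so far
        t = max(thresholds,
                key=lambda x: reward_sum[x] / pulls[x] if pulls[x] else 1.0)
    r = episode_reward(t)                 # run one simulated attack episode
    reward_sum[t] += r
    pulls[t] += 1

best = max(thresholds,
           key=lambda x: reward_sum[x] / pulls[x] if pulls[x] else 0.0)
print(best)
```

Because the simulated reward is deterministic, the loop quickly concentrates on the 0.4 threshold; with noisy real-world feedback, the same structure would average over episodes instead.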
arXiv Detail & Related papers (2021-03-04T07:38:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.