Investigating Robustness in Cyber-Physical Systems: Specification-Centric Analysis in the face of System Deviations
- URL: http://arxiv.org/abs/2311.07462v2
- Date: Mon, 25 Mar 2024 19:02:47 GMT
- Title: Investigating Robustness in Cyber-Physical Systems: Specification-Centric Analysis in the face of System Deviations
- Authors: Changjian Zhang, Parv Kapoor, Romulo Meira-Goes, David Garlan, Eunsuk Kang, Akila Ganlath, Shatadal Mishra, Nejib Ammar
- Abstract summary: A critical attribute of cyber-physical systems (CPS) is robustness, denoting their capacity to operate safely despite disruptions and uncertainties in the operating environment.
This paper proposes a novel specification-based notion of robustness, which characterizes the effectiveness of a controller in meeting a specified system requirement.
We present an innovative two-layer simulation-based analysis framework designed to identify subtle robustness violations.
- Score: 8.8690305802668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The adoption of cyber-physical systems (CPS) is on the rise in complex physical environments, encompassing domains such as autonomous vehicles, the Internet of Things (IoT), and smart cities. A critical attribute of CPS is robustness, denoting its capacity to operate safely despite potential disruptions and uncertainties in the operating environment. This paper proposes a novel specification-based notion of robustness, which characterizes the effectiveness of a controller in meeting a specified system requirement, articulated through Signal Temporal Logic (STL), while accounting for possible deviations in the system. Based on this definition, the paper also proposes the robustness falsification problem, which involves identifying minor deviations capable of violating the specified requirement. We present an innovative two-layer simulation-based analysis framework designed to identify subtle robustness violations. To assess our methodology, we devise a series of benchmark problems wherein system parameters can be adjusted to emulate various forms of uncertainties and disturbances. Initial evaluations indicate that our falsification approach proficiently identifies robustness violations, providing valuable insights for comparing robustness between conventional and reinforcement learning (RL)-based controllers.
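To make the two-layer idea concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's implementation: `simulate()`, `stl_robustness()`, and `falsify()` are invented names, the plant is a toy first-order system under a proportional controller, and the requirement is the simple STL safety property G(|x| <= c). The inner layer simulates the closed-loop system under a candidate deviation and evaluates the quantitative STL robustness of the resulting trace; the outer layer searches the deviation space (here, an actuator bias and a sensor-noise level) for the smallest deviation that yields a violating trace.

```python
# Hypothetical sketch of two-layer, simulation-based robustness falsification.
# Toy plant, controller, and deviation model; not the paper's benchmark systems.
import random

def simulate(actuator_bias, noise_std, steps=400, dt=0.05):
    """Inner layer: roll out a toy first-order plant under a nominal
    proportional controller, perturbed by a system deviation
    (constant actuator bias and sensor noise)."""
    x, trace = 1.0, []
    for _ in range(steps):
        measured = x + random.gauss(0.0, noise_std)  # sensor-noise deviation
        u = -2.0 * measured                          # nominal P-controller
        x = x + dt * (u + actuator_bias)             # actuator-bias deviation
        trace.append(x)
    return trace

def stl_robustness(trace, bound=1.5):
    """Quantitative robustness of the STL spec G(|x| <= bound):
    negative if and only if the requirement is violated."""
    return min(bound - abs(x) for x in trace)

def falsify(budget=500, seed=0):
    """Outer layer: randomly sample deviations and keep the smallest one
    (by Euclidean norm) whose simulated trace violates the requirement."""
    random.seed(seed)
    best = None
    for _ in range(budget):
        delta = (random.uniform(0.0, 4.0),   # actuator bias
                 random.uniform(0.0, 0.5))   # sensor noise std
        rho = stl_robustness(simulate(*delta))
        if rho < 0.0:
            size = (delta[0] ** 2 + delta[1] ** 2) ** 0.5
            if best is None or size < best[0]:
                best = (size, delta, rho)
    return best

if __name__ == "__main__":
    print("smallest violating deviation found:", falsify())
```

In a realistic setting the outer layer would use a guided search or optimizer over the deviation space rather than uniform random sampling, and the STL requirement would be richer than a single safety bound, but the loop structure stays the same.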
Related papers
- Hybrid Temporal Differential Consistency Autoencoder for Efficient and Sustainable Anomaly Detection in Cyber-Physical Systems [0.0]
Cyberattacks on critical infrastructure, particularly water distribution systems, have increased due to rapid digitalization.
This study addresses key challenges in anomaly detection by leveraging time correlations in sensor data.
We propose a hybrid autoencoder-based approach, referred to as hybrid TDC-AE, which extends TDC by incorporating both deterministic nodes and conventional statistical nodes.
arXiv Detail & Related papers (2025-04-08T09:22:44Z) - Quantifying Robustness: A Benchmarking Framework for Deep Learning Forecasting in Cyber-Physical Systems [44.61435605872856]
We introduce a practical robustness definition grounded in distributional robustness, explicitly tailored to industrial CPS.
Our framework simulates realistic disturbances, such as sensor drift, noise and irregular sampling, enabling thorough robustness analyses of forecasting models.
arXiv Detail & Related papers (2025-04-04T14:50:48Z) - Anomaly Detection in Complex Dynamical Systems: A Systematic Framework Using Embedding Theory and Physics-Inspired Consistency [0.0]
Anomaly detection in complex dynamical systems is essential for ensuring reliability, safety, and efficiency in industrial and cyber-physical infrastructures.
We propose a system-theoretic approach to anomaly detection, grounded in classical embedding theory and physics-inspired consistency principles.
Our findings support the hypothesis that anomalies disrupt stable system dynamics, providing a robust, interpretable signal for anomaly detection.
arXiv Detail & Related papers (2025-02-26T17:06:13Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Tolerance of Reinforcement Learning Controllers against Deviations in Cyber Physical Systems [8.869030580266799]
We introduce a new, expressive notion of tolerance that describes how well a controller is capable of satisfying a desired system requirement.
We propose a novel analysis problem, called the tolerance falsification problem, which involves finding small deviations that result in a violation of the given requirement.
We present a novel, two-layer simulation-based analysis framework and a novel search for finding small tolerance violations.
arXiv Detail & Related papers (2024-06-24T18:33:45Z) - Dynamic Vulnerability Criticality Calculator for Industrial Control Systems [0.0]
This paper introduces an innovative approach by proposing a dynamic vulnerability criticality calculator.
Our methodology encompasses the analysis of environmental topology and the effectiveness of deployed security mechanisms.
Our approach integrates these factors into a comprehensive Fuzzy Cognitive Map model, incorporating attack paths to holistically assess the overall vulnerability score.
arXiv Detail & Related papers (2024-03-20T09:48:47Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Algorithmic Robustness [18.406992961818368]
Robustness is an important enabler of other goals that are frequently cited in the context of public policy decisions about computational systems.
This document provides a brief roadmap to some of the concepts and existing research around the idea of algorithmic robustness.
arXiv Detail & Related papers (2023-10-17T17:51:12Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Robust Policy Learning over Multiple Uncertainty Sets [91.67120465453179]
Reinforcement learning (RL) agents need to be robust to variations in safety-critical environments.
We develop an algorithm that enjoys the benefits of both system identification and robust RL.
arXiv Detail & Related papers (2022-02-14T20:06:28Z) - Adversarially Robust Stability Certificates can be Sample-Efficient [14.658040519472646]
We consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems.
We show that the statistical cost of learning an adversarial stability certificate is equivalent, up to constant factors, to that of learning a nominal stability certificate.
arXiv Detail & Related papers (2021-12-20T17:23:31Z) - Inter-Domain Fusion for Enhanced Intrusion Detection in Power Systems: An Evidence Theoretic and Meta-Heuristic Approach [0.0]
False alerts due to compromised IDS in ICS networks can lead to severe economic and operational damage.
This work presents an approach for reducing false alerts in CPS power systems by dealing with uncertainty without prior distribution of alerts.
arXiv Detail & Related papers (2021-11-20T00:05:39Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.