Data-driven Design of Context-aware Monitors for Hazard Prediction in Artificial Pancreas Systems
- URL: http://arxiv.org/abs/2104.02545v1
- Date: Tue, 6 Apr 2021 14:36:33 GMT
- Title: Data-driven Design of Context-aware Monitors for Hazard Prediction in Artificial Pancreas Systems
- Authors: Xugui Zhou, Bulbul Ahmed, James H. Aylor, Philip Asare, Homa Alemzadeh
- Abstract summary: Medical Cyber-physical Systems (MCPS) are vulnerable to accidental or malicious faults that can target their controllers and cause safety hazards and harm to patients.
This paper proposes a combined model and data-driven approach for designing context-aware monitors that can detect early signs of hazards and mitigate them.
- Score: 2.126171264016785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical Cyber-physical Systems (MCPS) are vulnerable to accidental or
malicious faults that can target their controllers and cause safety hazards and
harm to patients. This paper proposes a combined model and data-driven approach
for designing context-aware monitors that can detect early signs of hazards and
mitigate them in MCPS. We present a framework for the formal specification of
unsafe system context using Signal Temporal Logic (STL), combined with an
optimization method that refines the STL formulas per patient using real or
simulated faulty data from the closed-loop system; the refined formulas are
then used to generate the monitor logic. We evaluate our approach in simulation using two
state-of-the-art closed-loop Artificial Pancreas Systems (APS). The results
show that the context-aware monitor achieves up to a 1.4x increase in average
hazard prediction accuracy (F1-score) over several baseline monitors, reduces
false-positive and false-negative rates, and enables hazard mitigation with a
54% success rate while decreasing the average risk for patients.
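
The following toy sketch illustrates the flavor of such a monitor: an STL-style "always" property over a glucose window, gated on an unsafe context (high insulin-on-board), plus a simple patient-specific threshold refinement. All signal names, thresholds, and the hazard property are illustrative assumptions, not the paper's actual formulas or optimization method.

```python
# Toy STL-style context-aware hazard monitor (illustrative only; the paper's
# actual STL formulas, contexts, and refinement procedure are not reproduced).

def stl_always_ge(trace, threshold):
    """Robustness of G(x >= threshold) over a finite trace:
    min_t (x(t) - threshold). Negative means the property is violated."""
    return min(x - threshold for x in trace)

def monitor(glucose_window, insulin_on_board, theta_lo=70.0, iob_hi=3.0):
    """Flag a potential hypoglycemia hazard when insulin-on-board is high
    (an assumed unsafe context) and G(glucose >= theta_lo) is violated."""
    unsafe_context = insulin_on_board > iob_hi
    return unsafe_context and stl_always_ge(glucose_window, theta_lo) < 0.0

def refine_threshold(traces, labels, candidates):
    """Patient-specific refinement sketch: pick the threshold with the best
    F1-score on labeled (real or simulated) faulty traces."""
    def f1(theta):
        preds = [stl_always_ge(tr, theta) < 0.0 for tr in traces]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(candidates, key=f1)

# Example: a falling glucose trace (mg/dL) under high insulin-on-board (U).
window = [95.0, 88.0, 80.0, 72.0, 66.0]
print(monitor(window, insulin_on_board=3.5))  # True: hazard predicted
```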
Related papers
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- KnowSafe: Combined Knowledge and Data Driven Hazard Mitigation in Artificial Pancreas Systems [3.146076597280736]
KnowSafe predicts and mitigates safety hazards resulting from malicious attacks or accidental faults targeting a CPS controller.
We integrate domain-specific knowledge of safety constraints and context-specific mitigation actions with machine learning (ML) techniques.
KnowSafe outperforms the state-of-the-art by achieving higher accuracy in predicting system state trajectories and potential hazards.
arXiv Detail & Related papers (2023-11-13T16:43:34Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Monitoring machine learning (ML)-based risk prediction algorithms in the presence of confounding medical interventions [4.893345190925178]
Performance monitoring of machine learning (ML)-based risk prediction models in healthcare is complicated by the issue of confounding medical interventions (CMI).
A simple approach is to ignore CMI and monitor only the untreated patients, whose outcomes remain unaltered.
We show that valid inference is still possible if one monitors conditional performance and if either conditional exchangeability or time-constant selection bias hold.
arXiv Detail & Related papers (2022-11-17T18:54:34Z)
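
As a toy illustration of the monitoring strategy in the entry above, the sketch below scores predictions only on untreated patients, within covariate strata (i.e., conditional performance). It illustrates the idea rather than the paper's estimator, and its validity rests on the paper's assumptions (conditional exchangeability or time-constant selection bias); the data are assumed.

```python
# Conditional performance monitoring under CMI: evaluate only untreated
# patients, stratified by covariates. Illustrative sketch with assumed data.
from collections import defaultdict

def conditional_brier(records):
    """Brier score per covariate stratum over untreated patients only.
    Each record: (stratum, treated, predicted_risk, outcome)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for stratum, treated, risk, outcome in records:
        if treated:  # treatment (CMI) alters the outcome, so skip
            continue
        sums[stratum] += (risk - outcome) ** 2
        counts[stratum] += 1
    return {s: sums[s] / counts[s] for s in counts}

records = [("high", False, 0.8, 1), ("high", True, 0.9, 0),
           ("low", False, 0.1, 0), ("low", False, 0.2, 1)]
print(conditional_brier(records))  # approx {'high': 0.04, 'low': 0.325}
```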
- Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems [2.088376060651494]
This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS.
We test the hypothesis of whether integrating the domain knowledge (e.g., on unsafe system behavior) with the ML models can improve the robustness of anomaly detection without sacrificing accuracy and transparency.
arXiv Detail & Related papers (2022-04-20T02:02:56Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
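
To make the barrier-function idea in the entry above concrete, here is a minimal safety filter for the scalar integrator x' = u with safe set {x <= x_max}: taking h(x) = x_max - x gives h' = -u, so the standard CBF condition h' >= -alpha*h reduces to u <= alpha*h(x). This is a toy sketch of a plain CBF filter, not the paper's robust output CBFs learned from expert demonstrations.

```python
# Minimal control-barrier-function safety filter (toy sketch, see lead-in).

def cbf_filter(x, u_nominal, x_max=10.0, alpha=1.0):
    """Return the input closest to u_nominal that satisfies u <= alpha*h(x)."""
    h = x_max - x  # barrier value; h >= 0 inside the safe set
    return min(u_nominal, alpha * h)

# Near the boundary the filter clips an aggressive nominal command.
print(cbf_filter(x=9.5, u_nominal=5.0))  # 0.5: slows the approach to x_max
print(cbf_filter(x=2.0, u_nominal=5.0))  # 5.0: nominal input already safe
```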
- Sample-Efficient Safety Assurances using Conformal Prediction [57.92013073974406]
Early warning systems can provide alerts when an unsafe situation is imminent.
To reliably improve safety, these warning systems should have a provable false negative rate.
We present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics.
arXiv Detail & Related papers (2021-09-28T23:00:30Z)
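
A minimal sketch of the split-conformal construction behind such a warning system: calibrate an alert threshold on scores from known-unsafe episodes so that a new unsafe episode is missed with probability at most epsilon. The calibration scores below are assumed data, and this is not the paper's exact pipeline.

```python
# Split-conformal alert threshold with a bounded false negative rate:
# choose tau as a lower quantile of calibration scores from unsafe episodes,
# so P(new unsafe score < tau) <= epsilon under exchangeability.
import math

def conformal_threshold(unsafe_scores, epsilon=0.1):
    scores = sorted(unsafe_scores)
    n = len(scores)
    k = math.floor(epsilon * (n + 1))  # 1-indexed order statistic; k/(n+1) <= epsilon
    return scores[max(k - 1, 0)]       # clamp: guarantee needs n >= 1/epsilon - 1

def should_alert(score, tau):
    return score >= tau

# Danger scores from simulated unsafe rollouts (assumed data).
calib = [0.91, 0.85, 0.88, 0.95, 0.79, 0.93, 0.87, 0.90, 0.82, 0.96,
         0.84, 0.89, 0.92, 0.86, 0.94, 0.81, 0.97, 0.83, 0.98, 0.80]
tau = conformal_threshold(calib, epsilon=0.10)           # tau = 0.80 here
print(should_alert(0.92, tau), should_alert(0.75, tau))  # True False
```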
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Monitoring and Diagnosability of Perception Systems [21.25149064251918]
We propose a mathematical model for runtime monitoring and fault detection and identification in perception systems.
We demonstrate our monitoring system, dubbed PerSyS, in realistic simulations using the LGSVL self-driving simulator and the Apollo Auto autonomy software stack.
arXiv Detail & Related papers (2020-11-11T23:03:14Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
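
The entropic risk measure named in the entry above has a compact Monte Carlo form, rho_theta(J) = (1/theta) * log E[exp(theta*J)], which interpolates between the mean cost (theta -> 0) and the worst case (theta -> infinity). Below is a minimal sketch with assumed sampled trajectory costs; the paper embeds this measure in a sequential-action-control objective rather than evaluating it standalone.

```python
# Monte Carlo estimate of the entropic risk rho_theta(J), with a
# log-sum-exp shift for numerical stability. Toy sketch with assumed costs.
import math, random

def entropic_risk(costs, theta):
    m = max(costs)
    return m + math.log(sum(math.exp(theta * (c - m)) for c in costs)
                        / len(costs)) / theta

random.seed(0)
costs = [random.gauss(1.0, 0.5) for _ in range(1000)]  # sampled trajectory costs
print(entropic_risk(costs, theta=0.1))   # near the mean cost
print(entropic_risk(costs, theta=10.0))  # weighted toward worst-case costs
```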
- Assurance Monitoring of Cyber-Physical Systems with Machine Learning Components [2.1320960069210484]
We investigate how to use the conformal prediction framework for assurance monitoring of Cyber-Physical Systems.
In order to handle high-dimensional inputs in real-time, we compute nonconformity scores using embedding representations of the learned models.
By leveraging conformal prediction, the approach provides well-calibrated confidence and enables monitoring with a small, bounded error rate.
arXiv Detail & Related papers (2020-01-14T19:34:51Z)