Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems
- URL: http://arxiv.org/abs/2211.15413v1
- Date: Wed, 23 Nov 2022 22:43:48 GMT
- Title: Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems
- Authors: Maryam Bagheri, Josephine Lamp, Xugui Zhou, Lu Feng, Homa Alemzadeh
- Abstract summary: We develop a safety assurance case for Machine Learning controllers in learning-enabled MCPS.
We provide a detailed analysis by implementing a deep neural network for prediction in Artificial Pancreas Systems.
We check the sufficiency of the ML data and analyze the correctness of the ML-based prediction using formal verification.
- Score: 3.098385261166847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) technologies have been increasingly adopted in Medical
Cyber-Physical Systems (MCPS) to enable smart healthcare. Assuring the safety
and effectiveness of learning-enabled MCPS is challenging, as such systems must
account for diverse patient profiles and physiological dynamics and handle
operational uncertainties. In this paper, we develop a safety assurance case
for ML controllers in learning-enabled MCPS, with an emphasis on establishing
confidence in the ML-based predictions. We present the safety assurance case in
detail for Artificial Pancreas Systems (APS) as a representative application of
learning-enabled MCPS, and provide a detailed analysis by implementing a deep
neural network for prediction in APS. We check the sufficiency of the ML
data and analyze the correctness of the ML-based prediction using formal
verification. Finally, we outline open research problems based on our
experience in this paper.
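Since the abstract names formal verification of the ML-based prediction as a key step, a minimal sketch may help make that concrete. The Python snippet below performs interval bound propagation (IBP), a simple sound verification technique, through a toy ReLU network standing in for the APS glucose predictor. The architecture, weights, input region, and safety bounds are all invented for illustration; they are not the paper's actual model or property.

```python
import numpy as np

# Hypothetical 2-layer ReLU network standing in for the paper's APS glucose
# predictor; in practice the weights would come from the trained model.
rng = np.random.default_rng(0)
W1, b1 = 0.3 * rng.normal(size=(8, 4)), np.zeros(8)   # 4 normalized CGM inputs
W2, b2 = 0.3 * rng.normal(size=(1, 8)), np.zeros(1)   # 1 predicted glucose value

def affine_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Input region to verify: all normalized CGM histories in [-1, 1]^4.
lo, hi = -np.ones(4), np.ones(4)
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)     # ReLU is monotone
out_lo, out_hi = affine_bounds(lo, hi, W2, b2)

# Hypothetical safety property: the (normalized) prediction never leaves
# [-3, 3]. IBP over-approximates, so a passing check is a sound proof,
# while a failed bound only means a tighter verifier is needed.
print(f"certified output range: [{out_lo[0]:.2f}, {out_hi[0]:.2f}]")
print("property holds" if -3.0 <= out_lo[0] and out_hi[0] <= 3.0
      else "bound too loose: refine the analysis")
```

The design point IBP illustrates is soundness with looseness: if the certified range sits inside the safety bounds, the property is proved for every input in the region, but a failed check proves nothing on its own. The paper's own analysis may well use a different or complete verifier; this sketch only conveys the shape of the check.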
Related papers
- Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
arXiv Detail & Related papers (2023-11-13T14:56:14Z)
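For context, a minimal sketch of the "foundational probabilistic testing" the roadmap above starts from: sample many random closed-loop executions and estimate the probability of violating a safety property. The plant dynamics, controller, and threshold below are invented stand-ins, not anything from the paper.

```python
import random

# Toy closed-loop step: a hypothetical learned controller drives a scalar
# plant state toward a setpoint; both functions are illustrative stand-ins.
def controller(state):
    return -0.5 * (state - 1.0)          # pretend this is an ML policy

def plant_step(state, action, noise):
    return state + 0.1 * action + noise  # simplistic dynamics

def violates_safety(trace, limit=5.0):
    return any(abs(s) > limit for s in trace)

# Probabilistic testing: sample many random executions and report the
# estimated probability of violating the safety property.
random.seed(42)
violations, trials = 0, 10_000
for _ in range(trials):
    state, trace = random.uniform(-2, 2), []
    for _ in range(100):
        state = plant_step(state, controller(state), random.gauss(0, 0.05))
        trace.append(state)
    violations += violates_safety(trace)

print(f"estimated violation probability: {violations / trials:.4f}")
```

Such sampling gives only a statistical estimate, which is exactly the gap the paper's roadmap toward formal assurance is meant to close.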
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments of ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
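To make the threat above concrete, here is a hedged sketch of an FGSM-style distortion against a linear detector over power-signal features; the detector, weights, and numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical linear anomaly detector over power-signal features:
# score = w . x + b, flagged anomalous if score > 0. Invented weights.
w = np.array([0.8, -0.4, 0.3, 0.5])
b = -0.2

x = np.array([0.1, 0.2, -0.1, 0.05])      # benign measurement (illustrative)
print("clean score:", w @ x + b)           # below 0: classified normal

# FGSM-style distortion: shift each feature by epsilon in the direction
# that increases the score, i.e. along sign(d score / d x) = sign(w).
epsilon = 0.3
x_adv = x + epsilon * np.sign(w)
print("perturbed score:", w @ x_adv + b)   # crosses the decision boundary
```

A small, bounded perturbation flips the classification here because the attack aligns exactly with the model's gradient, which is the core vulnerability pattern the review surveys for MLsgAPPs.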
- Case Study-Based Approach of Quantum Machine Learning in Cybersecurity: Quantum Support Vector Machine for Malware Classification and Protection [8.34729912896717]
We design and develop ten QML-based learning modules covering various cybersecurity topics.
In this paper, we utilize a quantum support vector machine (QSVM) for malware classification and protection.
We demonstrate our QSVM model and achieve an accuracy of 95% in malware classification and protection.
arXiv Detail & Related papers (2023-06-01T02:04:09Z)
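Structurally, a QSVM is a kernel SVM whose Gram-matrix entries are estimated on quantum hardware. The sketch below shows that structure using scikit-learn's precomputed-kernel interface; the RBF function is a purely classical stand-in for a quantum fidelity kernel, and the tiny malware/benign dataset is invented, so this illustrates the pattern rather than the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

def kernel_matrix(A, B, gamma=1.0):
    """Classical RBF stand-in: a real QSVM would estimate each entry
    k(a, b) on quantum hardware (e.g., as a state-overlap fidelity)."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Tiny invented dataset: rows are feature vectors extracted from
# (hypothetical) malware/benign samples, labels 1 = malware.
X_train = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y_train = np.array([1, 1, 0, 0])
X_test = np.array([[0.15, 0.85], [0.85, 0.15]])

# SVC with kernel="precomputed" consumes Gram matrices directly, which is
# exactly where quantum-estimated kernel values would be plugged in.
clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
print(clf.predict(kernel_matrix(X_test, X_train)))  # expect [1, 0]
```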
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
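One hedged reading of the multi-task idea above is to attach an auxiliary head (here, a separate probe) that tries to recover a sensitive attribute from the model's representation; accuracy well above chance signals a possible shortcut. Everything below (the data, the deliberately "leaky" representation, the probe) is invented to illustrate that reading, not the paper's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
age = rng.integers(0, 2, n)                               # sensitive attribute (invented)
disease = (rng.random(n) < 0.3 + 0.3 * age).astype(int)   # label correlated with it

# Hypothetical learned representation: one feature genuinely tracks disease,
# another leaks the sensitive attribute (the shortcut).
rep = np.column_stack([disease + rng.normal(0, 0.8, n),
                       age + rng.normal(0, 0.3, n)])

# Auxiliary probe head: if the shared representation predicts the sensitive
# attribute well above chance, a shortcut may be present.
probe = LogisticRegression().fit(rep, age)
print(f"sensitive-attribute probe accuracy: {probe.score(rep, age):.2f}")

# Mitigation sketch: drop (or down-weight) the leaky direction and re-check.
probe2 = LogisticRegression().fit(rep[:, :1], age)
print(f"after removing leaky feature:       {probe2.score(rep[:, :1], age):.2f}")
```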
- Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems [2.088376060651494]
This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS.
We test the hypothesis that integrating domain knowledge (e.g., about unsafe system behavior) with ML models can improve the robustness of anomaly detection without sacrificing accuracy and transparency.
arXiv Detail & Related papers (2022-04-20T02:02:56Z)
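A minimal sketch of the data-plus-knowledge combination the paper above tests: an ML anomaly score is OR-ed with a hand-written rule encoding known-unsafe behavior, so suppressing the learned score alone cannot hide a rule violation. The scores, rule, threshold, and traces are all invented.

```python
import numpy as np

def ml_anomaly_score(window):
    """Stand-in for a learned detector: flags sudden jumps. Invented logic."""
    return float(np.abs(np.diff(window)).max())

def violates_domain_rule(window, hard_limit=180.0):
    """Knowledge-driven check: a hypothetical hard physical/physiological limit."""
    return bool((np.asarray(window) > hard_limit).any())

def hybrid_detector(window, ml_threshold=20.0):
    # Alarm if EITHER the learned score or the domain rule fires; the rule
    # gives a robustness floor even if the ML score is adversarially suppressed.
    return ml_anomaly_score(window) > ml_threshold or violates_domain_rule(window)

normal = [100, 102, 101, 103, 104]
attack = [120, 139, 158, 177, 196]   # slow drift: each step stays under the ML threshold
print(hybrid_detector(normal))       # False
print(hybrid_detector(attack))       # True: the domain rule still fires
```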
- The Role of Explainability in Assuring Safety of Machine Learning in Healthcare [9.462125772941347]
This paper identifies ways in which explainable AI methods can contribute to safety assurance of ML-based systems.
The results are also represented in a safety argument to show where, and in what way, explainable AI methods can contribute to a safety case.
arXiv Detail & Related papers (2021-09-01T09:32:14Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
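A common, simple way to surface the predictive uncertainty this paper argues is usually ignored is a bootstrap ensemble: train several models on resamples and report the spread of their predictions. The regression data below are invented, and the ensemble is a generic technique, not the paper's specific multi-agent design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, (200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 2.0, 200)   # invented noisy process

# Bootstrap ensemble: each member sees a different resample of the data.
members = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    members.append(LinearRegression().fit(X[idx], y[idx]))

x_new = np.array([[5.0], [15.0]])             # second point is out-of-range
preds = np.stack([m.predict(x_new) for m in members])

# The mean is the point prediction; the std across members is a cheap
# uncertainty estimate that grows for the extrapolated input.
for x, mu, sd in zip(x_new[:, 0], preds.mean(0), preds.std(0)):
    print(f"x={x:4.1f}  prediction={mu:6.2f}  uncertainty={sd:.2f}")
```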
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
We review new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Towards a Robust and Trustworthy Machine Learning System Development [0.09236074230806578]
We present our recent survey of state-of-the-art ML trustworthiness and technologies from a security engineering perspective.
We then go beyond the survey by describing a metamodel we created that represents the body of knowledge in a standard, visualized way for ML practitioners.
We propose future research directions motivated by our findings to advance the development of robust and trustworthy ML systems.
arXiv Detail & Related papers (2021-01-08T14:43:58Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
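For context on the securely aggregated federated learning mentioned above: the toy round below averages hospital model updates after additive masking, so the server sees only masked vectors whose masks cancel in the sum. This is a bare-bones illustration of the general idea; PriMIA's actual protocol and APIs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Three hypothetical hospitals each hold a local model update (for example,
# the weight delta from one epoch of training on private images).
updates = [rng.normal(0, 0.1, size=4) for _ in range(3)]

# Toy additive secure aggregation: clients agree on pairwise masks that
# cancel in the sum, so the server only ever sees masked vectors.
masks = {(i, j): rng.normal(0, 1.0, size=4)
         for i in range(3) for j in range(3) if i < j}

def masked(i, update):
    m = sum(masks[(i, j)] for j in range(3) if i < j) \
        - sum(masks[(j, i)] for j in range(3) if j < i)
    return update + m

server_view = [masked(i, u) for i, u in enumerate(updates)]  # individually meaningless
fed_avg = np.mean(server_view, axis=0)                        # masks cancel exactly

assert np.allclose(fed_avg, np.mean(updates, axis=0))
print("federated average:", np.round(fed_avg, 3))
```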
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of papers on this site.