Review of the AMLAS Methodology for Application in Healthcare
- URL: http://arxiv.org/abs/2209.00421v1
- Date: Thu, 1 Sep 2022 13:00:36 GMT
- Title: Review of the AMLAS Methodology for Application in Healthcare
- Authors: Shakir Laher, Carla Brackstone, Sara Reis, An Nguyen, Sean White,
Ibrahim Habli
- Abstract summary: There is a need to proactively assure the safety of ML to prevent patient safety from being compromised.
The Assurance of Machine Learning for use in Autonomous Systems methodology was developed by the Assuring Autonomy International Programme.
This review has appraised the methodology by consulting ML manufacturers to understand whether it converges with or diverges from their current safety assurance practices.
- Score: 2.6072209210124675
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, the number of machine learning (ML) technologies gaining
regulatory approval for healthcare has increased significantly allowing them to
be placed on the market. However, the regulatory frameworks applied to them
were originally devised for traditional software, which has largely rule-based
behaviour, compared to the data-driven and learnt behaviour of ML. As the
frameworks are in the process of reformation, there is a need to proactively
assure the safety of ML to prevent patient safety from being compromised. The
Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology
was developed by the Assuring Autonomy International Programme based on
well-established concepts in system safety. This review has appraised the
methodology by consulting ML manufacturers to understand whether it converges
with or diverges from their current safety assurance practices, whether there
are gaps or limitations in its structure, and whether it is fit for purpose
when applied to
the healthcare domain. Through this work we offer the view that there is clear
utility for AMLAS as a safety assurance methodology when applied to healthcare
machine learning technologies, although development of healthcare specific
supplementary guidance would benefit those implementing the methodology.
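For orientation, the sketch below represents the staged structure AMLAS prescribes as a simple checklist data structure in Python. The six stage names are paraphrased from the published AMLAS guidance; the evidence items attached to each stage are illustrative assumptions for a healthcare deployment, not an extract from the methodology or from this review.

```python
from dataclasses import dataclass, field

@dataclass
class AmlasStage:
    """One stage of the AMLAS lifecycle and the evidence gathered for it."""
    name: str
    evidence: list = field(default_factory=list)  # assumed example artefacts
    complete: bool = False

# Stage names paraphrased from the AMLAS guidance; evidence items are
# illustrative assumptions for a healthcare ML component.
safety_case = [
    AmlasStage("ML safety assurance scoping",
               ["system-level hazard analysis", "ML component boundary"]),
    AmlasStage("ML safety requirements assurance",
               ["quantified performance targets per hazard"]),
    AmlasStage("Data management assurance",
               ["dataset relevance, completeness and balance review"]),
    AmlasStage("Model learning assurance",
               ["training log", "internal test results"]),
    AmlasStage("Model verification assurance",
               ["independent test set results", "robustness checks"]),
    AmlasStage("Model deployment assurance",
               ["integration tests", "post-market monitoring plan"]),
]

def outstanding(stages):
    """Return the stages whose safety argument is not yet supported."""
    return [s.name for s in stages if not s.complete]

if __name__ == "__main__":
    print("Unsupported stages:", outstanding(safety_case))
```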
Related papers
- Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z)
- Beyond One-Time Validation: A Framework for Adaptive Validation of Prognostic and Diagnostic AI-based Medical Devices [55.319842359034546]
Existing approaches often fall short in addressing the complexity of practically deploying these devices.
The presented framework emphasizes the importance of repeating validation and fine-tuning during deployment.
It is positioned within the current US and EU regulatory landscapes.
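A minimal sketch of the adaptive-validation idea summarised above: periodically re-score the deployed model on freshly labelled cases and flag it for fine-tuning when performance falls below the release criterion. The AUC threshold, batch size and function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

RELEASE_AUC_THRESHOLD = 0.85  # assumed acceptance criterion from initial validation

def auc(scores, labels):
    """Rank-based AUC estimate (Mann-Whitney U divided by n_pos * n_neg)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def revalidate(model_scores, labels):
    """One adaptive-validation cycle: re-check the deployed model on fresh cases."""
    current_auc = auc(model_scores, labels)
    needs_fine_tuning = current_auc < RELEASE_AUC_THRESHOLD
    return current_auc, needs_fine_tuning

# Example: simulated monthly batch of 200 labelled post-deployment cases.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.25, 200), 0, 1)
print(revalidate(scores, labels))
```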
arXiv Detail & Related papers (2024-09-07T11:13:52Z)
- MedISure: Towards Assuring Machine Learning-based Medical Image Classifiers using Mixup Boundary Analysis [3.1256597361013725]
Machine learning (ML) models are becoming integral in healthcare technologies.
Traditional software assurance techniques rely on fixed code and do not directly apply to ML models.
We present a novel technique called Mix-Up Boundary Analysis (MUBA) that facilitates evaluating image classifiers in terms of prediction fairness.
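MUBA itself is only summarised above; the sketch below illustrates the underlying mixup idea it builds on: linearly interpolate between two images and record where the classifier's predicted class flips along the path. The toy classifier and the flip-point reading are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mixup(x_a, x_b, lam):
    """Convex combination of two images, lam in [0, 1]."""
    return lam * x_a + (1.0 - lam) * x_b

def decision_path(model, x_a, x_b, steps=21):
    """Predicted class at evenly spaced mixup points between x_a and x_b."""
    lams = np.linspace(0.0, 1.0, steps)
    preds = [int(np.argmax(model(mixup(x_a, x_b, l)))) for l in lams]
    return lams, preds

def boundary_lambda(lams, preds):
    """First interpolation weight at which the predicted class changes."""
    for lam, prev, cur in zip(lams[1:], preds, preds[1:]):
        if cur != prev:
            return lam
    return None  # no flip: both endpoints sit on the same side of the boundary

# Toy stand-in model: mean intensity above 0.5 -> class 1, else class 0.
toy_model = lambda x: np.array([1.0 - x.mean(), x.mean()])
dark, bright = np.zeros((8, 8)), np.ones((8, 8))
lams, preds = decision_path(toy_model, dark, bright)
print("decision flips at lambda ~=", boundary_lambda(lams, preds))
```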
arXiv Detail & Related papers (2023-11-23T12:47:43Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs (ML-based smart grid applications) in the context of safety-critical power systems.
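As a toy illustration of the kind of adversarial distortion this review is concerned with, the sketch below adds a small sign-of-gradient perturbation to a power-signal window so that a simple linear detector's score is pushed upward. The detector, signal and epsilon are assumptions for illustration, not taken from the surveyed attacks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear detector: score = w . x + b, flagged as anomalous if score > 0.
w = rng.normal(size=64)
b = -0.5
detector = lambda x: float(w @ x + b)

def fgsm_perturbation(x, epsilon):
    """Fast-gradient-sign style distortion that pushes the detector score upward.

    For a linear detector the gradient of the score w.r.t. the input is just w,
    so the attack adds epsilon * sign(w) to every sample of the signal window.
    """
    return x + epsilon * np.sign(w)

clean_signal = rng.normal(scale=0.1, size=64)  # stand-in for a power measurement window
attacked_signal = fgsm_perturbation(clean_signal, epsilon=0.05)

print("clean score:   ", detector(clean_signal))
print("attacked score:", detector(attacked_signal))
```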
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems [3.098385261166847]
We develop a safety assurance case for machine learning controllers in learning-enabled medical cyber-physical systems (MCPS).
We provide a detailed analysis by implementing a deep neural network predictor for Artificial Pancreas Systems.
We check the sufficiency of the ML data and analyze the correctness of the ML-based prediction using formal verification.
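The formal-verification step is only named above; the sketch below shows one common verification primitive, interval bound propagation, used to check that a tiny predictor's output stays inside a safe range for every input in a given box. The network weights, scaling and safety band are assumptions for illustration, not the paper's Artificial Pancreas model.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

def output_bounds(layers, lo, hi):
    """Sound (over-approximate) output bounds of a ReLU network on the input box."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    return lo, hi

# Tiny assumed predictor: recent glucose readings (scaled) -> next reading.
rng = np.random.default_rng(2)
layers = [(rng.normal(scale=0.3, size=(8, 4)), np.zeros(8)),
          (rng.normal(scale=0.3, size=(1, 8)), np.array([1.0]))]

lo_in, hi_in = np.full(4, 0.7), np.full(4, 1.3)  # input box around normal readings
lo_out, hi_out = output_bounds(layers, lo_in, hi_in)
print("predicted range:", lo_out, hi_out)
print("within assumed safe band [0.4, 2.5]?", lo_out[0] >= 0.4 and hi_out[0] <= 2.5)
```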
arXiv Detail & Related papers (2022-11-23T22:43:48Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
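As a simplified stand-in for the multi-task assessment described above, the sketch below probes whether a model's learned features encode a shortcut attribute (here, the acquisition site) by fitting a linear probe on those features; high probe accuracy for a clinically irrelevant attribute is a warning sign. The simulated features and the probe are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def linear_probe_accuracy(features, attribute, train_frac=0.7, seed=0):
    """Fit a least-squares linear probe to predict a binary attribute from features."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    cut = int(train_frac * len(features))
    tr, te = idx[:cut], idx[cut:]
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X[tr], 2.0 * attribute[tr] - 1.0, rcond=None)
    preds = (X[te] @ w > 0).astype(int)
    return (preds == attribute[te]).mean()

# Simulated penultimate-layer features where the imaging site leaks into one direction.
rng = np.random.default_rng(3)
site = rng.integers(0, 2, 1000)        # shortcut attribute: acquisition site
features = rng.normal(size=(1000, 32))
features[:, 0] += 1.5 * site           # leakage: feature 0 correlates with site

acc = linear_probe_accuracy(features, site)
print(f"probe accuracy for the shortcut attribute: {acc:.2f} (0.5 would mean no leakage)")
```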
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- The Role of Explainability in Assuring Safety of Machine Learning in Healthcare [9.462125772941347]
This paper identifies ways in which explainable AI methods can contribute to safety assurance of ML-based systems.
The results are also represented in a safety argument to show where, and in what way, explainable AI methods can contribute to a safety case.
arXiv Detail & Related papers (2021-09-01T09:32:14Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
The survey organizes new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
This organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS) [16.579772998870233]
We introduce a methodology for the Assurance of Machine Learning for use in Autonomous Systems (AMLAS).
AMLAS comprises a set of safety case patterns and a process for integrating safety assurance into the development of ML components.
arXiv Detail & Related papers (2021-02-02T15:41:57Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
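PriMIA's secure aggregation and attack evaluation are only summarised above; the sketch below shows the plain federated-averaging step such systems build on, where sites share model updates rather than raw images. Secure multi-party aggregation is deliberately omitted, and the model size and site counts are illustrative assumptions.

```python
import numpy as np

def local_update(global_weights, site_gradient, lr=0.1):
    """One local training step at a hospital site (gradient assumed precomputed)."""
    return global_weights - lr * site_gradient

def federated_average(site_weights, site_sizes):
    """Weight each site's model by its dataset size (plain FedAvg, no secure aggregation)."""
    sizes = np.asarray(site_sizes, dtype=float)
    stacked = np.stack(site_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(4)
global_w = np.zeros(16)                 # toy model: 16 parameters
site_sizes = [1200, 450, 800]           # images held at each hospital

# Each round: sites train locally, only model updates leave the hospital.
site_weights = [local_update(global_w, rng.normal(size=16)) for _ in site_sizes]
global_w = federated_average(site_weights, site_sizes)
print("aggregated global weights (first 4):", np.round(global_w[:4], 3))
```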
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Usable Security for ML Systems in Mental Health: A Framework [2.436681150766912]
This paper introduces a framework to guide and evaluate security-related designs, implementations, and deployments of Machine learning systems in mental health.
We propose new principles and requirements to make security mechanisms usable for end-users of those ML systems in mental health.
We present several concrete scenarios where different usable security cases and profiles in ML-systems in mental health applications are examined and evaluated.
arXiv Detail & Related papers (2020-08-18T04:44:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.