MedISure: Towards Assuring Machine Learning-based Medical Image
Classifiers using Mixup Boundary Analysis
- URL: http://arxiv.org/abs/2311.13978v1
- Date: Thu, 23 Nov 2023 12:47:43 GMT
- Title: MedISure: Towards Assuring Machine Learning-based Medical Image
Classifiers using Mixup Boundary Analysis
- Authors: Adam Byfield, William Poulett, Ben Wallace, Anusha Jose, Shatakshi
Tyagi, Smita Shembekar, Adnan Qayyum, Junaid Qadir, and Muhammad Bilal
- Abstract summary: Machine learning (ML) models are becoming integral in healthcare technologies.
Traditional software assurance techniques rely on fixed code and do not directly apply to ML models.
We present a novel technique called Mix-Up Boundary Analysis (MUBA) that facilitates evaluating image classifiers in terms of prediction fairness.
- Score: 3.1256597361013725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) models are becoming integral in healthcare
technologies, presenting a critical need for formal assurance to validate their
safety, fairness, robustness, and trustworthiness. These models are inherently
prone to errors that pose serious risks to patient health and could even
cause irreparable harm. Traditional software assurance techniques rely on
fixed code and do not directly apply to ML models since these algorithms are
adaptable and learn from curated datasets through a training process. However,
adapting established principles, such as boundary testing using synthetic test
data, can effectively bridge this gap. To this end, we present a novel technique
called Mix-Up Boundary Analysis (MUBA) that facilitates evaluating image
classifiers in terms of prediction fairness. We evaluated MUBA for two
important medical imaging tasks -- brain tumour classification and breast
cancer classification -- and achieved promising results. This research aims to
showcase the importance of adapting traditional assurance principles for
assessing ML models to enhance the safety and reliability of healthcare
technologies. To facilitate future research, we plan to publicly release our
code for MUBA.
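The abstract describes MUBA only at a high level and the authors' code is not yet released; the sketch below is a minimal illustration of the underlying mixup idea, assuming a trained Keras-style classifier `model` and two images `x_a` and `x_b` from different classes. Sweeping the mixup coefficient and watching where the predicted class flips estimates where the interpolation path crosses the decision boundary.

```python
# Minimal sketch of mixup-based boundary probing -- an illustration of the
# idea, not the authors' MUBA release. Assumes `model` exposes a Keras-style
# predict() returning class probabilities.
import numpy as np

def mixup_boundary_scan(model, x_a, x_b, steps=50):
    """Sweep the mixup coefficient and record predictions along the path."""
    lambdas = np.linspace(0.0, 1.0, steps)
    preds = []
    for lam in lambdas:
        x_mix = lam * x_a + (1.0 - lam) * x_b  # linear interpolation in image space
        preds.append(model.predict(x_mix[None, ...], verbose=0)[0])
    preds = np.asarray(preds)
    labels = preds.argmax(axis=1)
    # The first coefficient at which the predicted class flips approximates
    # where the path crosses the decision boundary.
    flips = np.where(labels[:-1] != labels[1:])[0]
    boundary_lambda = lambdas[flips[0] + 1] if len(flips) else None
    return lambdas, preds, boundary_lambda
```

Comparing the distribution of boundary-crossing coefficients across class pairs is one way such scans could feed the prediction-fairness assessment the abstract mentions: a systematic asymmetry suggests the learned boundary sits closer to one class than the other.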
Related papers
- Empowering Healthcare through Privacy-Preserving MRI Analysis [3.6394715554048234]
We introduce the Ensemble-Based Federated Learning (EBFL) Framework.
The EBFL framework deviates from the conventional approach by sharing model features rather than sensitive patient data.
We have achieved remarkable precision in the classification of brain tumors, including glioma, meningioma, pituitary, and non-tumor instances.
arXiv Detail & Related papers (2024-03-14T19:51:18Z)
- Evaluation of Predictive Reliability to Foster Trust in Artificial
Intelligence. A case study in Multiple Sclerosis [0.34473740271026115]
Spotting Machine Learning failures is of paramount importance when ML predictions are used to drive clinical decisions.
We propose a simple approach that can be used in the deployment phase of any ML model to suggest whether to trust predictions or not.
Our method holds the promise to provide effective support to clinicians by spotting potential ML failures during deployment.
arXiv Detail & Related papers (2024-02-27T14:48:07Z)
- Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
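The two-step pipeline summarized above (concepts queried from GPT-4, then grounded with a vision-language model) can be approximated with the open-source open_clip package; the sketch below is a hedged illustration, and the concept strings are invented placeholders rather than the paper's GPT-4 output.

```python
# Illustrative concept-bottleneck scoring with open_clip; not the paper's code.
import torch
import open_clip

concepts = ["irregular lesion margin", "calcification", "homogeneous tissue"]  # placeholders
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def concept_scores(image):
    """Map a PIL image to one interpretable score per clinical concept."""
    with torch.no_grad():
        img_feat = model.encode_image(preprocess(image).unsqueeze(0))
        txt_feat = model.encode_text(tokenizer(concepts))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        return (img_feat @ txt_feat.T).squeeze(0)  # cosine similarity per concept

# A linear head trained on these scores would form the final classifier,
# keeping every input to the decision human-readable.
```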
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Uncertainty-informed Mutual Learning for Joint Medical Image
Classification and Segmentation [27.67559996444668]
We propose a novel Uncertainty-informed Mutual Learning (UML) framework for reliable and interpretable medical image analysis.
Our framework introduces reliability to joint classification and segmentation tasks, leveraging mutual learning with uncertainty to improve performance.
Our framework has the potential to guide the development of more reliable and explainable medical image analysis models.
arXiv Detail & Related papers (2023-03-17T15:23:15Z)
- Interpretability from a new lens: Integrating Stratification and Domain
knowledge for Biomedical Applications [0.0]
This paper proposes a novel computational strategy for stratifying biomedical problem datasets into k-fold cross-validation (CV) splits.
This approach can improve model stability, establish trust, and provide explanations for outcomes generated by trained IML models.
arXiv Detail & Related papers (2023-03-15T12:02:02Z)
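The paper's stratification strategy is domain-informed and not detailed in the summary above; for reference, the snippet below shows the standard label-stratified k-fold split from scikit-learn that such a strategy refines, run on placeholder data.

```python
# Baseline stratified k-fold CV with scikit-learn; the paper's contribution
# is a domain-knowledge-driven refinement of this kind of split.
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(100, 20)        # placeholder features
y = np.random.randint(0, 2, 100)   # placeholder binary labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each fold keeps roughly the class balance of the full dataset,
    # which stabilizes per-fold metrics and downstream explanations.
    print(fold, np.bincount(y[train_idx]), np.bincount(y[test_idx]))
```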
- Safe AI for health and beyond -- Monitoring to transform a health
service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updating models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
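As a rough sketch of the multi-task idea above (not the paper's implementation), one can attach a second head to a shared encoder to predict a suspected shortcut attribute; the `encoder` and all dimensions below are assumptions for illustration.

```python
# Schematic multi-task probe for shortcut learning: if the attribute head
# reaches high accuracy from features trained for the clinical task, the
# representation encodes the attribute -- a shortcut-learning warning sign.
import torch.nn as nn

class MultiTaskProbe(nn.Module):
    def __init__(self, encoder, feat_dim, n_classes, n_attrs):
        super().__init__()
        self.encoder = encoder                            # shared image encoder
        self.task_head = nn.Linear(feat_dim, n_classes)   # clinical label
        self.attr_head = nn.Linear(feat_dim, n_attrs)     # e.g. a demographic attribute

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.attr_head(z)
```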
- Distillation to Enhance the Portability of Risk Models Across
Institutions with Large Patient Claims Database [12.452703677540505]
We investigate the practicality of model portability through a cross-site evaluation of readmission prediction models.
We apply a recurrent neural network, augmented with self-attention and blended with expert features, to build readmission prediction models.
Our experiments show that ML models trained at one institution and tested at another perform worse than models trained and tested at the same institution.
arXiv Detail & Related papers (2022-07-06T05:26:32Z)
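The described readmission architecture can be sketched schematically as below; the GRU, the additive attention form, and all sizes are assumptions for illustration, not details taken from the paper.

```python
# Schematic readmission model: GRU over claim-code sequences, self-attention
# pooling, and a blend with hand-crafted expert features.
import torch
import torch.nn as nn

class ReadmissionModel(nn.Module):
    def __init__(self, n_codes, emb=64, hidden=128, n_expert=16):
        super().__init__()
        self.embed = nn.Embedding(n_codes, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)             # additive self-attention scores
        self.out = nn.Linear(hidden + n_expert, 1)   # blend in expert features

    def forward(self, codes, expert_feats):
        h, _ = self.rnn(self.embed(codes))           # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)       # attention weights over visits
        ctx = (w * h).sum(dim=1)                     # weighted pooling to (B, hidden)
        return self.out(torch.cat([ctx, expert_feats], dim=-1))  # readmission logit
```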
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
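The zeroth-order optimization that lets BAR adapt a model from input-output responses alone can be illustrated with a standard two-point finite-difference gradient estimator; the sketch below is a generic estimator of this family, not the paper's exact scheme.

```python
# Generic two-point zeroth-order gradient estimate: the black box is queried
# through loss_fn only, so no backpropagation is needed.
import numpy as np

def zoo_gradient(loss_fn, theta, q=10, mu=1e-2):
    """Average directional derivatives along q random unit directions."""
    grad = np.zeros_like(theta)
    for _ in range(q):
        u = np.random.randn(*theta.shape)
        u /= np.linalg.norm(u)
        delta = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
        grad += delta * u
    return grad / q
```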
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.