Privacy-preserving Machine Learning for Medical Image Classification
- URL: http://arxiv.org/abs/2108.12816v1
- Date: Sun, 29 Aug 2021 10:50:18 GMT
- Title: Privacy-preserving Machine Learning for Medical Image Classification
- Authors: Shreyansh Singh and K.K. Shukla
- Abstract summary: Image classification is an important use case of Machine Learning (ML) in the medical industry.
However, automated systems like these raise privacy concerns.
In this study, we aim to solve these problems in the context of a medical image classification task: detecting pneumonia from chest X-ray images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rising use of Machine Learning (ML) and Deep Learning (DL) in
various industries, the medical industry is no exception. A simple yet
extremely important use case of ML in this industry is image classification.
It helps doctors detect certain diseases in a timely manner, acting as an aid
that reduces the chance of human judgement error. However, when using
automated systems like these, there is a privacy concern as well. Attackers
must not be able to gain access to patients' medical records and images. The
model must also be secure: neither the data sent to the model nor the
predictions that are returned should be revealed to the model in clear text.
In this study, we aim to solve these problems in the context of a medical
image classification task: detecting pneumonia from chest X-ray images.
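The abstract's requirement that neither inputs nor predictions be revealed to the model in clear text can be met with several cryptographic techniques. As one illustration only (the paper's actual protocol is not reproduced here, and all names below are hypothetical), additive secret sharing keeps a client's feature vector hidden from two non-colluding servers while still allowing a linear layer to be evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Split x into two additive shares: x = s0 + s1.
    Neither share alone reveals x (a random mask hides it)."""
    mask = rng.normal(size=np.shape(x))
    return mask, x - mask

# A toy "model": one linear layer (w, b) held server-side.
w = rng.normal(size=(4, 3))
b = rng.normal(size=3)

# The client secret-shares its feature vector between two
# non-colluding servers; each computes on its share only.
x = rng.normal(size=4)
x0, x1 = share(x)

# Linear layers are share-friendly: (x0 + x1) @ w + b splits into
# share-wise terms, so no server ever sees x in the clear.
y0 = x0 @ w + b   # server 0's partial result
y1 = x1 @ w       # server 1's partial result

# Only the client recombines the output shares.
y = y0 + y1
assert np.allclose(y, x @ w + b)
```

Non-linear layers need extra machinery (e.g. multiplication triples or garbled circuits), which this sketch deliberately omits.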
Related papers
- Medical Multimodal Model Stealing Attacks via Adversarial Domain Alignment [79.41098832007819]
Medical multimodal large language models (MLLMs) are becoming an instrumental part of healthcare systems.
As medical data is scarce and protected by privacy regulations, medical MLLMs represent valuable intellectual property.
We introduce Adversarial Domain Alignment (ADA-STEAL), the first stealing attack against medical MLLMs.
arXiv Detail & Related papers (2025-02-04T16:04:48Z) - Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
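A concept bottleneck of the kind this entry describes can be reduced to two steps: score an image against named clinical concepts, then classify from those scores alone. The sketch below is a minimal stand-in (random vectors replace the paper's GPT-4-queried concepts and vision-language encoders; all shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical stand-ins: in the paper, concept texts come from GPT-4
# and embeddings from a vision-language model's text/image encoders.
concept_embeddings = rng.normal(size=(5, 16))   # 5 clinical concepts
image_embedding = rng.normal(size=16)

# Bottleneck step: express the image as explicit concept scores.
concept_scores = np.array([cosine(image_embedding, c)
                           for c in concept_embeddings])

# The final classifier is a simple linear layer over concept scores,
# not over opaque latent features -- that is what makes it readable.
class_weights = rng.normal(size=(5, 2))          # 2 classes
logits = concept_scores @ class_weights
prediction = int(np.argmax(logits))
```

Because the prediction depends only on the five concept scores, each decision can be inspected concept by concept.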
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Few Shot Learning for Medical Imaging: A Comparative Analysis of Methodologies and Formal Mathematical Framework [0.0]
The scarcity of problem-dependent training data has become a major obstacle to the easy application of deep learning in the medical sector.
Few-shot learning algorithms aim to solve this data limitation by extracting characteristics from a small dataset.
In the medical sector, datasets for certain confidential diseases are frequently in short supply.
arXiv Detail & Related papers (2023-05-08T01:05:22Z) - A Trustworthy Framework for Medical Image Analysis with Deep Learning [71.48204494889505]
TRUDLMIA is a trustworthy deep learning framework for medical image analysis.
It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
arXiv Detail & Related papers (2022-12-06T05:30:22Z) - Multi-Label Chest X-Ray Classification via Deep Learning [0.0]
The goal of this paper is to develop a lightweight solution to detect 14 different chest conditions from an X-ray image.
Along with the image features, we also use non-image features available in the data, such as X-ray view type, age, and gender.
Our aim is to improve upon previous work, expand prediction to 14 diseases, and provide insight for future chest radiography research.
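Combining image features with record-level metadata, as this entry describes, is commonly done by concatenating the two feature vectors before a multi-label head. A toy sketch of that fusion (random stand-ins for a CNN backbone's output; all shapes and names are hypothetical, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a CNN backbone would produce these image features.
image_features = rng.normal(size=128)

# Non-image features from the record: view type (one-hot),
# age (scaled to ~[0, 1]), gender (binary flag).
view_onehot = np.array([1.0, 0.0])   # e.g. frontal vs lateral
age = np.array([63 / 100.0])
gender = np.array([1.0])

# Fuse by concatenation, then apply a multi-label head: one sigmoid
# per condition (14 conditions; labels can co-occur, so no softmax).
fused = np.concatenate([image_features, view_onehot, age, gender])
head = rng.normal(size=(fused.size, 14))
probs = 1.0 / (1.0 + np.exp(-(fused @ head)))
```

Independent sigmoids rather than a softmax are the standard choice here, since a patient may have several of the 14 conditions at once.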
arXiv Detail & Related papers (2022-11-27T20:27:55Z) - Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
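The property that makes homomorphic encryption (HE) usable for predictions over medical data is that arithmetic on ciphertexts maps to arithmetic on plaintexts. The paper's experiments rely on production HE schemes; as a self-contained illustration only, here is textbook Paillier encryption with toy-sized (completely insecure) primes, showing that multiplying ciphertexts adds plaintexts:

```python
import math
import random

def L(u, n):
    return (u - 1) // n

def keygen(p=1789, q=1861):
    # Toy primes for illustration only -- NOT secure parameters.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because g = n + 1
    return (n, n + 1), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen()
a, b = 123, 456
ca, cb = encrypt(pub, a), encrypt(pub, b)

# Additive homomorphism: a server can sum encrypted values
# (e.g. weighted pixel contributions) without ever decrypting.
c_sum = (ca * cb) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == a + b
```

Schemes used in practice for imaging workloads (e.g. CKKS) additionally support approximate multiplication on encrypted reals, which Paillier does not.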
arXiv Detail & Related papers (2021-10-12T19:57:12Z) - Anomaly Detection in Medical Imaging -- A Mini Review [0.2455468619225742]
This paper performs a semi-exhaustive literature review of relevant anomaly detection papers in medical imaging and clusters them by application.
The main results showed that the current research is mostly motivated by reducing the need for labelled data.
Also, the successful and substantial amount of research in the brain MRI domain shows the potential for applications in further domains like OCT and chest X-ray.
arXiv Detail & Related papers (2021-08-25T11:45:40Z) - In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages a body of literature to apply image transformations that compensate for the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z) - Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models [8.853343040790795]
Jekyll is a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition.
We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes.
We also investigate defensive measures based on machine learning to detect images generated by Jekyll.
arXiv Detail & Related papers (2021-04-05T18:23:36Z) - IAIA-BL: A Case-based Interpretable Deep Learning Model for
Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z) - Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
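Secure aggregation of the kind a framework like PriMIA builds on can be illustrated with pairwise additive masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate. This is a simplified sketch of the general idea, not PriMIA's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local model updates from 3 hospitals (flattened weights).
updates = [rng.normal(size=8) for _ in range(3)]

# Pairwise masking: for each client pair (i, j), client i adds a shared
# random mask and client j subtracts it. Individually the masked updates
# look random; only their sum is meaningful.
n = len(updates)
masked = [u.copy() for u in updates]
for i in range(n):
    for j in range(i + 1, n):
        mask = rng.normal(size=8)
        masked[i] += mask
        masked[j] -= mask

# Server-side federated averaging over the masked updates: the masks
# cancel, so the server recovers the true average but never an
# individual hospital's update.
aggregate = sum(masked) / n
assert np.allclose(aggregate, sum(updates) / n)
```

Real deployments derive the pairwise masks from key agreement and add dropout recovery, which this toy omits.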
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.