Secure and Robust Machine Learning for Healthcare: A Survey
- URL: http://arxiv.org/abs/2001.08103v1
- Date: Tue, 21 Jan 2020 08:12:36 GMT
- Title: Secure and Robust Machine Learning for Healthcare: A Survey
- Authors: Adnan Qayyum, Junaid Qadir, Muhammad Bilal, and Ala Al-Fuqaha
- Abstract summary: We present an overview of various application areas in healthcare that leverage machine learning techniques from a security and privacy point of view.
In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications.
- Score: 5.7890697111442435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed widespread adoption of machine learning (ML)/deep
learning (DL) techniques due to their superior performance for a variety of
healthcare applications ranging from the prediction of cardiac arrest from
one-dimensional heart signals to computer-aided diagnosis (CADx) using
multi-dimensional medical images. Notwithstanding the impressive performance of
ML/DL, there are still lingering doubts regarding the robustness of ML/DL in
healthcare settings (which is traditionally considered quite challenging due to
the myriad security and privacy issues involved), especially in light of recent
results that have shown that ML/DL are vulnerable to adversarial attacks. In
this paper, we present an overview of various application areas in healthcare
that leverage such techniques from a security and privacy point of view and
present associated challenges. In addition, we present potential methods to
ensure secure and privacy-preserving ML for healthcare applications. Finally,
we provide insight into the current research challenges and promising
directions for future research.
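The adversarial vulnerability the abstract refers to can be illustrated with a one-step, FGSM-style perturbation. The sketch below uses a toy logistic-regression "diagnostic" model with synthetic weights and a synthetic feature vector (e.g. standing in for features of a one-dimensional heart signal); it is a minimal illustration of the attack idea under these assumptions, not the survey's own method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Shift input x by epsilon in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(np.dot(w, x) + b)          # model's predicted probability
    grad_x = (p - y) * w                   # d(cross-entropy)/dx for logistic model
    return x + epsilon * np.sign(grad_x)   # one-step adversarial example

rng = np.random.default_rng(0)
w = rng.normal(size=8)                     # toy model weights (placeholder)
b = 0.0
x = rng.normal(size=8)                     # synthetic patient feature vector
y = 1.0                                    # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
print(sigmoid(np.dot(w, x) + b), sigmoid(np.dot(w, x_adv) + b))
```

Because the perturbation follows the sign of the loss gradient, the model's confidence in the true class drops even though `x_adv` stays within a small L-infinity ball around `x`; this is the core mechanism behind the imperceptible attacks on medical classifiers that the survey discusses.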
Related papers
- Multimodal Federated Learning in Healthcare: a Review [5.983768682145731]
Federated Learning (FL) provides a decentralized mechanism where data need not be consolidated.
This paper outlines the current state-of-the-art approaches to Multimodal Federated Learning (MMFL) within the healthcare domain.
It aims to bridge the gap between cutting-edge AI technology and the imperative need for patient data privacy in healthcare applications.
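The decentralized training that FL provides can be sketched with a minimal federated-averaging (FedAvg) loop: each "hospital" takes a local gradient step on its private data and only the resulting weight vectors are averaged by the server. All data here is synthetic and the linear least-squares model is a placeholder; this is an illustration of the aggregation pattern, not the reviewed systems.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient-descent step of least-squares on a site's private data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(42)
w_true = np.array([1.0, -2.0, 0.5])        # ground-truth model (synthetic)
sites = []
for _ in range(3):                         # three hospitals with disjoint data
    X = rng.normal(size=(50, 3))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    sites.append((X, y))

w_global = np.zeros(3)
for _ in range(200):                       # communication rounds
    local_models = [local_step(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_models, axis=0)   # server averages weights only

print(np.round(w_global, 2))
```

The raw records in `sites` never leave their loop iteration; only model parameters cross the trust boundary, which is the privacy property these healthcare FL reviews build on.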
arXiv Detail & Related papers (2023-10-14T19:43:06Z)
- Redefining Digital Health Interfaces with Large Language Models [69.02059202720073]
Large Language Models (LLMs) have emerged as general-purpose models with the ability to process complex information.
We show how LLMs can provide a novel interface between clinicians and digital technologies.
We develop a new prognostic tool using automated machine learning.
arXiv Detail & Related papers (2023-10-05T14:18:40Z)
- Review of deep learning in healthcare [0.0]
This research examines deep learning methods used in healthcare systems via an examination of cutting-edge network designs, applications, and market trends.
The initial objective is to provide in-depth insight into the deployment of deep learning models in healthcare solutions.
Finally, it outlines the current unresolved issues and potential future directions.
arXiv Detail & Related papers (2023-10-01T16:58:20Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Privacy-preserving machine learning for healthcare: open challenges and future perspectives [72.43506759789861]
We conduct a review of recent literature concerning Privacy-Preserving Machine Learning (PPML) for healthcare.
We primarily focus on privacy-preserving training and inference-as-a-service.
The aim of this review is to guide the development of private and efficient ML models in healthcare.
arXiv Detail & Related papers (2023-03-27T19:20:51Z)
- Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems [3.098385261166847]
We develop a safety assurance case for Machine Learning controllers in learning-enabled MCPS.
We provide a detailed analysis by implementing a deep neural network for the prediction in Artificial Pancreas Systems.
We check the sufficiency of the ML data and analyze the correctness of the ML-based prediction using formal verification.
arXiv Detail & Related papers (2022-11-23T22:43:48Z)
- A Survey on Computer Vision based Human Analysis in the COVID-19 Era [58.79053747159797]
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals.
Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications.
These developments triggered the need for novel and improved computer vision techniques capable of (i) supporting the prevention measures through automated analysis of visual data and (ii) facilitating the normal operation of existing vision-based services, such as biometric authentication.
arXiv Detail & Related papers (2022-11-07T17:20:39Z)
- Federated Learning for Medical Applications: A Taxonomy, Current Trends, Challenges, and Future Research Directions [9.662980267339375]
We focus on medical applications of FL, particularly in the context of global cancer diagnosis.
Recent developments in FL have made it possible to train complex machine-learned models in a distributed manner.
arXiv Detail & Related papers (2022-08-05T21:41:15Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
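The gradient-based model inversion threat that frameworks like PriMIA evaluate against can be illustrated on a linear model with squared loss: the per-example gradient g = (w·x - y)·x is a scalar multiple of the private input x, so a server that observes one sample's gradient recovers the input's direction exactly. All values below are synthetic; this is a toy sketch of the attack class, not PriMIA's code or the attack it was tested with.

```python
import numpy as np

rng = np.random.default_rng(7)
w = rng.normal(size=16)                 # current model weights (public)
x = rng.normal(size=16)                 # private "patient" feature vector
y = 1.0                                 # its label

# Gradient of 0.5*(w.x - y)^2 w.r.t. w, as uploaded by the client:
g = (np.dot(w, x) - y) * x

# Inversion: the gradient is a scaled copy of x, so normalizing it
# recovers the input's direction.
x_hat = g / np.linalg.norm(g)
cosine = abs(np.dot(x_hat, x)) / np.linalg.norm(x)
print(round(cosine, 6))                 # ~1.0: direction fully recovered
```

Secure aggregation, as used in PriMIA, blocks this by ensuring the server only ever sees gradients summed over many clients, never a single sample's contribution.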
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.