Confidential and Protected Disease Classifier using Fully Homomorphic Encryption
- URL: http://arxiv.org/abs/2405.02790v1
- Date: Sun, 5 May 2024 02:10:00 GMT
- Title: Confidential and Protected Disease Classifier using Fully Homomorphic Encryption
- Authors: Aditya Malik, Nalini Ratha, Bharat Yalavarthi, Tilak Sharma, Arjun Kaushik, Charanjit Jutla
- Abstract summary: Many users seek potential causes on platforms like ChatGPT or Bard before consulting a medical professional for their ailment.
Despite the convenience of such platforms, sharing personal medical data online poses risks, including the presence of malicious platforms.
We propose a novel framework combining FHE and Deep Learning for a secure and private diagnosis system.
- Score: 0.09424565541639365
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the rapid surge in the prevalence of Large Language Models (LLMs), individuals are increasingly turning to conversational AI for initial insights across various domains, including health-related inquiries such as disease diagnosis. Many users seek potential causes on platforms like ChatGPT or Bard before consulting a medical professional for their ailment. These platforms offer valuable benefits by streamlining the diagnosis process, alleviating the significant workload of healthcare practitioners, and saving users both time and money by avoiding unnecessary doctor visits. However, despite the convenience of such platforms, sharing personal medical data online poses risks, including the presence of malicious platforms or potential eavesdropping by attackers. To address privacy concerns, we propose a novel framework combining FHE and Deep Learning for a secure and private diagnosis system. Operating on a question-and-answer-based model akin to an interaction with a medical practitioner, this end-to-end secure system employs Fully Homomorphic Encryption (FHE) to handle encrypted input data. Given FHE's computational constraints, we adapt deep neural networks and activation functions to the encrypted domain. Further, we also propose a faster algorithm to compute the summation of ciphertext elements. Through rigorous experiments, we demonstrate the efficacy of our approach. The proposed framework achieves strict security and privacy with minimal loss in performance.
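The abstract mentions a faster algorithm for summing ciphertext elements. The paper's exact algorithm is not given here, but a common baseline in SIMD-style FHE schemes (e.g., CKKS/BFV) is the log-depth rotate-and-add trick. The sketch below illustrates that baseline on plain Python lists standing in for ciphertext slots; in a real FHE library each rotation and addition would be a homomorphic operation.

```python
# Illustrative sketch (not the authors' algorithm): summing the slots of a
# packed ciphertext in log2(n) rotate-and-add rounds. Plain lists simulate
# ciphertext slots; a real FHE library would rotate and add homomorphically.

def rotate(slots, k):
    """Cyclically rotate the slot vector left by k positions."""
    k %= len(slots)
    return slots[k:] + slots[:k]

def slot_sum(slots):
    """After log2(n) rotate-and-add rounds, every slot holds the total sum."""
    n = len(slots)
    assert n & (n - 1) == 0, "slot count must be a power of two"
    shift = 1
    while shift < n:
        rotated = rotate(slots, shift)
        slots = [a + b for a, b in zip(slots, rotated)]
        shift *= 2
    return slots

result = slot_sum([1, 2, 3, 4, 5, 6, 7, 8])
# every slot now holds 36, using 3 rotations instead of 7 sequential additions
```

The appeal of this pattern under FHE is that it needs only O(log n) homomorphic rotations, each of which is far cheaper than decrypting, summing, and re-encrypting.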
Related papers
- Which Client is Reliable?: A Reliable and Personalized Prompt-based Federated Learning for Medical Image Question Answering [51.26412822853409]
We present a novel personalized federated learning (pFL) method for medical visual question answering (VQA) models.
Our method introduces learnable prompts into a Transformer architecture to efficiently train it on diverse medical datasets without massive computational costs.
arXiv Detail & Related papers (2024-10-23T00:31:17Z) - Explainable Machine Learning-Based Security and Privacy Protection Framework for Internet of Medical Things Systems [1.8434042562191815]
The Internet of Medical Things (IoMT) transcends traditional medical boundaries, enabling a transition from reactive treatment to proactive prevention.
Its benefits are countered by significant security challenges that endanger the lives of its users due to the sensitivity and value of the processed data.
A new framework for Intrusion Detection Systems (IDS) is introduced, leveraging Artificial Neural Networks (ANN) for intrusion detection while utilizing Federated Learning (FL) for privacy preservation.
arXiv Detail & Related papers (2024-03-14T11:57:26Z) - Prompt-based Personalized Federated Learning for Medical Visual Question Answering [56.002377299811656]
We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns.
We regard medical datasets from different organs as clients and use pFL to train personalized transformer-based VQA models for each client.
arXiv Detail & Related papers (2024-02-15T03:09:54Z) - Protect Your Score: Contact Tracing With Differential Privacy Guarantees [68.53998103087508]
We argue that privacy concerns currently hold deployment back.
We propose a contact tracing algorithm with differential privacy guarantees against this attack.
Especially for realistic test scenarios, we achieve a two to ten-fold reduction in the infection rate of the virus.
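A contact-tracing score released with differential privacy guarantees typically means the published value is perturbed with calibrated noise. The paper's mechanism is not detailed here; the sketch below shows the generic Laplace mechanism, with the sensitivity and epsilon values chosen purely for illustration.

```python
import math
import random

# Hedged illustration (not the paper's mechanism): releasing a risk score
# with epsilon-differential privacy via the standard Laplace mechanism.

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_score(score: float, sensitivity: float, epsilon: float,
                  rng: random.Random) -> float:
    """Perturb `score` with Laplace(sensitivity / epsilon) noise before release."""
    return score + laplace_noise(sensitivity / epsilon, rng)

# One individual's presence changes the score by at most 1.0 (assumed), so
# adding Laplace(1.0 / 0.5) noise gives a 0.5-DP release of the score.
noisy = private_score(0.7, sensitivity=1.0, epsilon=0.5, rng=random.Random(42))
```

Smaller epsilon means stronger privacy but noisier scores; the paper's reported infection-rate reductions suggest useful utility survives the noise.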
arXiv Detail & Related papers (2023-12-18T11:16:33Z) - Privacy-Preserving Medical Image Classification through Deep Learning and Matrix Decomposition [0.0]
Deep learning (DL) solutions have been extensively researched in the medical domain in recent years.
Because the usage of health-related data is strictly regulated, processing medical records outside the hospital environment demands robust data protection measures.
In this paper, we use singular value decomposition (SVD) and principal component analysis (PCA) to obfuscate the medical images before employing them in the DL analysis.
The capability of DL algorithms to extract relevant information from secured data is assessed on a task of angiographic view classification based on obfuscated frames.
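The SVD-based obfuscation described above can be pictured as rank truncation: keep only the leading singular components of each frame so that fine identifying detail is discarded while coarse structure useful for view classification survives. The rank `k` and this exact recipe are assumptions for illustration, not the paper's precise method.

```python
import numpy as np

# Hypothetical sketch of SVD-based frame obfuscation via rank truncation.
# The choice k=8 is arbitrary; the paper may combine SVD and PCA differently.

def obfuscate_frame(frame: np.ndarray, k: int = 8) -> np.ndarray:
    """Return a rank-k approximation of a 2D frame from its SVD."""
    U, s, Vt = np.linalg.svd(frame, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
frame = rng.random((64, 64))        # stand-in for an angiographic frame
low_rank = obfuscate_frame(frame, k=8)
```

The trade-off a classifier must then tolerate: lower `k` removes more patient-identifying texture but also more diagnostically relevant signal.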
arXiv Detail & Related papers (2023-08-31T08:21:09Z) - DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 [80.36535668574804]
We develop a novel GPT-4-enabled de-identification framework (DeID-GPT).
Our developed DeID-GPT showed the highest accuracy and remarkable reliability in masking private information from the unstructured medical text.
This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text data processing and de-identification.
arXiv Detail & Related papers (2023-03-20T11:34:37Z) - Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from the adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z) - NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves competitive accuracy to non-private baselines on a variety of x-ray tasks.
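The core NeuraCrypt idea, as summarized above, is that the data owner encodes records with a randomly constructed network whose weights only they know. The toy below sketches that pattern with a single random linear layer plus ReLU; the layer sizes, seed-as-key framing, and architecture are illustrative assumptions, not NeuraCrypt's actual design.

```python
import random

# Toy sketch of the random-network-encoding idea: a fixed, randomly
# initialized layer acts as the data owner's secret; only encodings are
# published. The architecture here is an assumption for illustration.

def make_encoder(in_dim: int, out_dim: int, seed: int):
    """Build a deterministic random linear+ReLU encoder; `seed` is the secret."""
    rng = random.Random(seed)
    weights = [[rng.gauss(0, 1) for _ in range(in_dim)]
               for _ in range(out_dim)]
    def encode(x):
        # linear map followed by ReLU, applied with the fixed secret weights
        return [max(0.0, sum(w * xi for w, xi in zip(row, x)))
                for row in weights]
    return encode

encode = make_encoder(in_dim=4, out_dim=3, seed=42)
public_view = encode([0.2, 0.5, 0.1, 0.9])  # safe to publish; seed stays private
```

A model can then be trained publicly on such encodings, since recovering the raw record would require the secret weights.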
arXiv Detail & Related papers (2021-06-04T13:42:21Z) - Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods [10.504951891644474]
We develop and evaluate a privacy defense protocol based on a generative adversarial network (GAN).
We validate the proposed method on retinal diagnostics AI used for diabetic retinopathy that bears the risk of possibly leaking private information.
arXiv Detail & Related papers (2021-03-04T15:02:57Z) - Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z) - Verifiable Proof of Health using Public Key Cryptography [13.992089238512678]
In the current pandemic, testing continues to be the most important tool for monitoring and curbing the disease spread.
The ability to verify the testing status is pertinent as public places prepare to safely open.
Recent advances in cryptographic tools have made it possible to build a secure and resilient digital-id system.
arXiv Detail & Related papers (2020-12-04T22:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.