An advanced data fabric architecture leveraging homomorphic encryption
and federated learning
- URL: http://arxiv.org/abs/2402.09795v1
- Date: Thu, 15 Feb 2024 08:50:36 GMT
- Title: An advanced data fabric architecture leveraging homomorphic encryption
and federated learning
- Authors: Sakib Anwar Rieyan, Md. Raisul Kabir News, A.B.M. Muntasir Rahman,
Sadia Afrin Khan, Sultan Tasneem Jawad Zaarif, Md. Golam Rabiul Alam,
Mohammad Mehedi Hassan, Michele Ianni, Giancarlo Fortino
- Abstract summary: This paper introduces a secure approach for medical image analysis using federated learning and partially homomorphic encryption within a distributed data fabric architecture.
The study demonstrates the method's effectiveness through a case study on pituitary tumor classification, achieving a significant level of accuracy.
- Score: 10.779491433438144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data fabric is an automated and AI-driven data fusion approach to accomplish
data management unification without moving data to a centralized location for
solving complex data problems. In a Federated learning architecture, the global
model is trained based on the learned parameters of several local models that
eliminate the necessity of moving data to a centralized repository for machine
learning. This paper introduces a secure approach for medical image analysis
using federated learning and partially homomorphic encryption within a
distributed data fabric architecture. With this method, multiple parties can
collaborate in training a machine-learning model without exchanging raw data
but using the learned or fused features. The approach complies with laws and
regulations such as HIPAA and GDPR, ensuring the privacy and security of the
data. The study demonstrates the method's effectiveness through a case study on
pituitary tumor classification, achieving a significant level of accuracy.
However, the primary focus of the study is on the development and evaluation of
federated learning and partially homomorphic encryption as tools for secure
medical image analysis. The results highlight the potential of these techniques
to be applied to other privacy-sensitive domains and contribute to the growing
body of research on secure and privacy-preserving machine learning.
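The abstract does not come with reference code, but the core idea of combining federated learning with partially homomorphic encryption can be illustrated concretely. The sketch below is a minimal, assumed setup using the third-party `phe` (python-paillier) package: clients encrypt locally learned weights under a shared public key, the aggregator sums ciphertexts (Paillier is additively homomorphic), and a key-holding party decrypts only the aggregate. The function names, plain-averaging scheme, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: federated averaging of model weights under Paillier
# (partially homomorphic) encryption. Illustrative only -- not the
# paper's implementation. Assumes the third-party `phe` package
# (python-paillier): pip install phe
import numpy as np
from phe import paillier

# A trusted key authority generates the keypair; clients receive only
# the public key, so the aggregator never sees plaintext weights.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def encrypt_weights(weights, pub):
    """Client side: encrypt a flat vector of locally learned weights."""
    return [pub.encrypt(float(w)) for w in weights]

def aggregate_encrypted(encrypted_updates):
    """Aggregator side: element-wise sum of ciphertexts.

    Paillier is additively homomorphic, so Enc(a) + Enc(b) decrypts to
    a + b without the aggregator seeing any individual client update.
    """
    total = encrypted_updates[0]
    for update in encrypted_updates[1:]:
        total = [t + u for t, u in zip(total, update)]
    return total

def decrypt_average(encrypted_sum, num_clients, priv):
    """Key holder: decrypt the aggregate and average it."""
    return np.array([priv.decrypt(c) for c in encrypted_sum]) / num_clients

# --- toy round with three clients holding locally trained weights ---
local_weights = [np.random.randn(4) for _ in range(3)]  # stand-ins for learned parameters
encrypted = [encrypt_weights(w, public_key) for w in local_weights]
global_weights = decrypt_average(aggregate_encrypted(encrypted), 3, private_key)

assert np.allclose(global_weights, np.mean(local_weights, axis=0), atol=1e-6)
```

Because aggregation only requires additions over ciphertexts, a partially homomorphic scheme such as Paillier suffices, which is the usual motivation for choosing it over fully homomorphic encryption in settings like the one the paper describes.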
Related papers
- FedCL-Ensemble Learning: A Framework of Federated Continual Learning with Ensemble Transfer Learning Enhanced for Alzheimer's MRI Classifications while Preserving Privacy [0.0]
This research work primarily uses transfer learning models such as ResNet, ImageNet, and VNet to extract high-level features from medical image data.
The proposed model was built using federated learning without sharing sensitive patient data.
arXiv Detail & Related papers (2024-11-15T13:49:22Z)
- EPIC: Enhancing Privacy through Iterative Collaboration [4.199844472131922]
Traditional machine learning techniques require centralized data collection and processing.
Privacy, ownership, and stringent regulation issues exist when pooling medical data into centralized storage.
The Federated learning (FL) approach overcomes such issues by setting up a central aggregator server and a shared global model.
arXiv Detail & Related papers (2024-11-07T20:10:34Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Homomorphic Encryption and Federated Learning based Privacy-Preserving CNN Training: COVID-19 Detection Use-Case [0.41998444721319217]
This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption.
The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from the adversaries.
arXiv Detail & Related papers (2022-04-16T08:38:35Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach can enable principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- Scaling Neuroscience Research using Federated Learning [1.2234742322758416]
Machine learning approaches that require data to be copied to a single location are hampered by the challenges of data sharing.
Federated Learning is a promising approach to learn a joint model over data silos.
This architecture does not share any subject data across sites, only aggregated parameters, often in encrypted environments.
arXiv Detail & Related papers (2021-02-16T20:30:04Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and Domain Adaptation: ABIDE Results [13.615292855384729]
To train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows population-level models to be trained without centralizing entities' data; a minimal aggregation sketch in this spirit follows the list.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
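None of the entries above ship code in this digest, so the following is a generic FedAvg-style sketch of the federated training loop they share: each site performs a few local training steps on private data, and only the resulting parameters are aggregated, weighted by local dataset size. The logistic-regression model, synthetic data, and hyper-parameters are stand-ins, not any listed paper's actual setup.

```python
# Minimal FedAvg-style sketch for a population-level model trained
# across sites that never share raw data. Purely illustrative NumPy
# code; model, data, and hyper-parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site: a few epochs of logistic-regression gradient descent
    on private data; only the resulting weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def fed_avg(client_weights, client_sizes):
    """Server: average site weights, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

# Three sites holding private (synthetic) data; only parameters move.
sites = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(8)
for _ in range(10):                            # federated rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = fed_avg(updates, [len(y) for _, y in sites])
```

In the encrypted variants surveyed above, the `fed_avg` step would operate on ciphertexts (as in the earlier Paillier sketch) or inside a secure multi-party computation protocol rather than on plaintext weights.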
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.