Comparison of Privacy-Preserving Distributed Deep Learning Methods in
Healthcare
- URL: http://arxiv.org/abs/2012.12591v1
- Date: Wed, 23 Dec 2020 10:45:52 GMT
- Title: Comparison of Privacy-Preserving Distributed Deep Learning Methods in
Healthcare
- Authors: Manish Gawali, Arvind C S, Shriya Suryavanshi, Harshit Madaan, Ashrika
Gaikwad, Bhanu Prakash KN, Viraj Kulkarni, Aniruddha Pant
- Abstract summary: In this paper, we compare three privacy-preserving distributed learning techniques: federated learning, split learning, and SplitFed.
We use these techniques to develop binary classification models for detecting tuberculosis from chest X-rays.
We propose a novel distributed learning architecture called SplitFedv3, which performs better than split learning and SplitFedv2 in our experiments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we compare three privacy-preserving distributed learning
techniques: federated learning, split learning, and SplitFed. We use these
techniques to develop binary classification models for detecting tuberculosis
from chest X-rays and compare them in terms of classification performance,
communication and computational costs, and training time. We propose a novel
distributed learning architecture called SplitFedv3, which performs better than
split learning and SplitFedv2 in our experiments. We also propose alternate
mini-batch training, a new training technique for split learning, that performs
better than alternate client training, where clients take turns to train a
model.
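The difference between alternate client training and the proposed alternate mini-batch training comes down to the order in which client mini-batches reach the split-learning server. The following is a minimal scheduling sketch; the client and batch labels are purely illustrative and do not come from the paper:

```python
from itertools import chain

def alternate_client_order(client_batches):
    """Alternate client training: each client trains on all of its
    mini-batches before the next client takes its turn."""
    return list(chain.from_iterable(client_batches))

def alternate_minibatch_order(client_batches):
    """Alternate mini-batch training: clients take turns after every
    mini-batch, interleaving their data round-robin."""
    order = []
    for step in zip(*client_batches):  # assumes equal batch counts per client
        order.extend(step)
    return order

# Two hypothetical clients, three mini-batches each
clients = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]
print(alternate_client_order(clients))     # ['a1', 'a2', 'a3', 'b1', 'b2', 'b3']
print(alternate_minibatch_order(clients))  # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
```

Interleaving at mini-batch granularity keeps the shared model from drifting toward whichever client trained last in a round, which is the intuition behind the reported improvement over alternate client training.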
Related papers
- FedCAR: Cross-client Adaptive Re-weighting for Generative Models in Federated Learning [3.7088276910640365]
Federated learning is a privacy-preserving solution for training distributed datasets across data centers.
We propose a novel algorithm aimed at improving the performance of generative models within FL.
Experimental results on three public chest X-ray datasets show superior performance in medical image generation.
arXiv Detail & Related papers (2024-12-16T05:43:14Z) - Distribution Shift Matters for Knowledge Distillation with Webly
Collected Images [91.66661969598755]
We propose a novel method dubbed "Knowledge Distillation between Different Distributions" (KD$^3$).
We first dynamically select useful training instances from the webly collected data according to the combined predictions of teacher network and student network.
We also build a new contrastive learning block called MixDistribution to generate perturbed data with a new distribution for instance alignment.
arXiv Detail & Related papers (2023-07-21T10:08:58Z) - Privacy and Efficiency of Communications in Federated Split Learning [5.902531418542073]
We propose a new hybrid Federated Split Learning architecture that combines the efficiency and privacy benefits of both.
Our evaluation demonstrates how our hybrid Federated Split Learning approach can lower the amount of processing power required by each client running a distributed learning system.
We also discuss the resiliency of our approach to deep learning privacy inference attacks and compare our solution to other recently proposed benchmarks.
arXiv Detail & Related papers (2023-01-04T21:16:55Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
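The FL aggregation step described above is commonly implemented as federated averaging (FedAvg), where the server combines locally trained weights in proportion to each client's dataset size. A minimal sketch over flattened weight vectors, with all names and numbers illustrative:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its
    dataset size, then average across clients."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical clients with flattened weight vectors and dataset sizes
w_a, w_b = [1.0, 2.0], [3.0, 4.0]
print(fedavg([w_a, w_b], [100, 300]))  # [2.5, 3.5]
```

In split learning, by contrast, no full model leaves the client; only the cut-layer activations (smashed data) and the corresponding gradients cross the network, which trades FL's one-shot model upload for per-batch communication.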
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - CXR-FL: Deep Learning-based Chest X-ray Image Analysis Using Federated
Learning [0.0]
We present an evaluation of deep learning-based models for chest X-ray image analysis using the federated learning method.
We show that classification models perform worse when trained on a region of interest reduced to the segmented lung area than when trained on the full image.
arXiv Detail & Related papers (2022-04-11T15:47:54Z) - Partner-Assisted Learning for Few-Shot Image Classification [54.66864961784989]
Few-shot Learning has been studied to mimic human visual capabilities and learn effective models without the need of exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
arXiv Detail & Related papers (2021-09-15T22:46:19Z) - Jigsaw Clustering for Unsupervised Visual Representation Learning [68.09280490213399]
We propose a new jigsaw clustering pretext task in this paper.
Our method makes use of information from both intra- and inter-images.
It is even comparable to contrastive learning methods when only half of the training batches are used.
arXiv Detail & Related papers (2021-04-01T08:09:26Z) - Vulnerability Due to Training Order in Split Learning [0.0]
In split learning, an additional privacy-preserving mechanism, the no-peek algorithm, can be incorporated; it is robust to adversarial attacks.
We show that a model trained using the data of all clients does not perform well on the data of the client considered earliest in a training round.
We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
arXiv Detail & Related papers (2021-03-26T06:30:54Z) - Interleaving Learning, with Application to Neural Architecture Search [12.317568257671427]
We propose a novel machine learning framework referred to as interleaving learning (IL).
In our framework, a set of models collaboratively learn a data encoder in an interleaving fashion.
We apply interleaving learning to search neural architectures for image classification on CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2021-03-12T00:54:22Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z) - Learning From Multiple Experts: Self-paced Knowledge Distillation for
Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.