Federated Self-Supervised Contrastive Learning and Masked Autoencoder
for Dermatological Disease Diagnosis
- URL: http://arxiv.org/abs/2208.11278v1
- Date: Wed, 24 Aug 2022 02:49:35 GMT
- Title: Federated Self-Supervised Contrastive Learning and Masked Autoencoder
for Dermatological Disease Diagnosis
- Authors: Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J.
James, Yiyu Shi, and Jingtong Hu
- Abstract summary: In dermatological disease diagnosis, the private data collected by mobile dermatology assistants exist on distributed mobile devices of patients.
We propose two federated self-supervised learning frameworks for dermatological disease diagnosis with limited labels.
Experiments on dermatological disease datasets show superior accuracy of the proposed frameworks over state-of-the-art methods.
- Score: 15.20791611477636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In dermatological disease diagnosis, the private data collected by mobile
dermatology assistants exist on distributed mobile devices of patients.
Federated learning (FL) can use decentralized data to train models while
keeping data local. Existing FL methods assume all the data have labels.
However, medical data often comes without full labels due to high labeling
costs. Self-supervised learning (SSL) methods such as contrastive learning (CL)
and masked autoencoders (MAE) can leverage unlabeled data to pre-train models,
which are then fine-tuned with limited labels. However, combining SSL and FL poses
unique challenges. For example, CL requires diverse data but each device only
has limited data. For MAE, while Vision Transformer (ViT) based MAE has higher
accuracy over CNNs in centralized learning, MAE's performance in FL with
unlabeled data has not been investigated. Besides, the ViT synchronization
between the server and clients is different from traditional CNNs. Therefore,
special synchronization methods need to be designed. In this work, we propose
two federated self-supervised learning frameworks for dermatological disease
diagnosis with limited labels. The first one features lower computation costs,
suitable for mobile devices. The second one features high accuracy and fits
high-performance servers. Based on CL, we propose federated contrastive
learning with feature sharing (FedCLF). Features are shared for diverse
contrastive information without sharing raw data for privacy. Based on MAE, we
propose FedMAE, whose knowledge split separates the global and local knowledge
learned by each client. Only the global knowledge is aggregated, for higher
generalization performance. Experiments on dermatological disease datasets show
superior accuracy of the proposed frameworks over state-of-the-art methods.
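The feature-sharing idea behind FedCLF can be illustrated with a minimal InfoNCE sketch in which the negatives come from a feature bank contributed by other clients, so that encoded features, never raw images, cross the network. The function name, shapes, and bank handling here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def info_nce_with_shared_features(local_q, local_k, shared_bank, temperature=0.07):
    """InfoNCE loss whose negatives are drawn from a cross-client feature bank.

    local_q, local_k: (B, D) embeddings of two augmented views of local images.
    shared_bank: (N, D) feature vectors shared by other clients, which supply
    the diverse negatives that a single small device cannot provide on its own.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    q, k, bank = l2norm(local_q), l2norm(local_k), l2norm(shared_bank)
    pos = np.sum(q * k, axis=1, keepdims=True)      # (B, 1) positive similarity
    neg = q @ bank.T                                # (B, N) negatives from other clients
    logits = np.concatenate([pos, neg], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()                   # positive sits at index 0
```

Sharing features instead of raw images is the privacy trade-off at the heart of the design: the bank enlarges the negative set without moving patient data off-device.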
Related papers
- Histopathological Image Classification and Vulnerability Analysis using Federated Learning [1.104960878651584]
The server sends a copy of the global model to all clients; each client trains its copy locally and sends the updated weights back to the server.
Data privacy is protected during training, as it is conducted locally on the clients' devices.
However, the global model is susceptible to data poisoning attacks.
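The train-locally, average-centrally loop described above is the standard federated averaging (FedAvg) pattern; a minimal sketch, with model weights held as lists of NumPy arrays and uniform client weighting assumed, also makes the poisoning risk visible, since every client's update enters the mean unchecked:

```python
import numpy as np

def fedavg_round(global_weights, client_updates):
    """One round of federated averaging.

    global_weights: list of arrays, the current global model (one per layer).
    client_updates: list of per-client weight lists, each shaped like
    global_weights and trained locally on that client's private data.
    Returns new global weights as the element-wise mean over clients.
    """
    new_weights = []
    for layer_idx in range(len(global_weights)):
        stacked = np.stack([update[layer_idx] for update in client_updates])
        new_weights.append(stacked.mean(axis=0))  # unweighted average of clients
    return new_weights
```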
arXiv Detail & Related papers (2023-10-11T10:55:14Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
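One-way offline distillation over unlabeled public data can be sketched as matching a student model to the averaged predictions of the client models. This generic ensemble-distillation loss is an illustration of the pattern, not the paper's exact objective:

```python
import numpy as np

def ensemble_distillation_loss(student_log_probs, teacher_probs_list):
    """KL(teacher ensemble || student) on unlabeled public inputs.

    student_log_probs: (B, C) log class probabilities from the student.
    teacher_probs_list: list of (B, C) probability arrays, one per client
    model; only these predictions (not weights or data) leave the clients.
    """
    ensemble = np.mean(teacher_probs_list, axis=0)       # average client predictions
    ensemble = ensemble / ensemble.sum(axis=1, keepdims=True)
    kl = ensemble * (np.log(ensemble + 1e-12) - student_log_probs)
    return kl.sum(axis=1).mean()
```

Because only predictions on public inputs are exchanged, the attack surface for recovering private training data is much smaller than when sharing raw gradients or weights.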
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Distributed Contrastive Learning for Medical Image Segmentation [16.3860181959878]
Supervised deep learning needs a large amount of labeled data to achieve high performance.
In medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective.
We propose two federated self-supervised learning frameworks for medical image segmentation with limited annotations.
arXiv Detail & Related papers (2022-08-07T20:47:05Z)
- Federated Contrastive Learning for Volumetric Medical Image Segmentation [16.3860181959878]
Federated learning (FL) can help in this regard by learning a shared model while keeping training data local for privacy.
Traditional FL requires fully-labeled data for training, which is inconvenient or sometimes infeasible to obtain.
In this work, we propose an FCL framework for volumetric medical image segmentation with limited annotations.
arXiv Detail & Related papers (2022-04-23T03:47:23Z)
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- Federated Contrastive Learning for Dermatological Disease Diagnosis via On-device Learning [15.862924197017264]
We propose an on-device framework for dermatological disease diagnosis with limited labels.
The proposed framework effectively improves the recall and precision of dermatological disease diagnosis compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-02-14T01:11:44Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedSLD: Federated Learning with Shared Label Distribution for Medical Image Classification [6.0088002781256185]
We propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks.
FedSLD adjusts the contribution of each data sample to the local objective during optimization given knowledge of the distribution.
Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms.
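One plausible form of such distribution-aware reweighting, illustrative only and not FedSLD's exact formulation, scales each sample's loss by the ratio of the shared label frequency to the local one, so that classes over-represented on a client do not dominate its local objective:

```python
import numpy as np

def distribution_weighted_nll(log_probs, labels, local_freq, shared_freq):
    """Cross-entropy with per-sample weights from a shared label distribution.

    log_probs: (B, C) log class probabilities from the local model.
    labels: (B,) integer class labels.
    local_freq: (C,) label frequencies observed on this client.
    shared_freq: (C,) label frequencies shared across the federation.
    Each sample is weighted by shared_freq / local_freq for its class.
    """
    weights = shared_freq[labels] / np.maximum(local_freq[labels], 1e-8)
    nll = -log_probs[np.arange(len(labels)), labels]   # per-sample negative log-likelihood
    return (weights * nll).sum() / weights.sum()        # weighted mean
```

When the local and shared distributions coincide, every weight is 1 and this reduces to the plain mean cross-entropy.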
arXiv Detail & Related papers (2021-10-15T21:38:25Z)
- Federated Semi-supervised Medical Image Classification via Inter-client Relation Matching [58.26619456972598]
Federated learning (FL) has emerged with increasing popularity to collaborate distributed medical institutions for training deep networks.
This paper studies a practical yet challenging FL problem, named Federated Semi-supervised Learning (FSSL).
We present a novel approach for this problem, which improves over traditional consistency regularization mechanism with a new inter-client relation matching scheme.
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.