Vicinal Feature Statistics Augmentation for Federated 3D Medical Volume
Segmentation
- URL: http://arxiv.org/abs/2310.15371v1
- Date: Mon, 23 Oct 2023 21:14:52 GMT
- Title: Vicinal Feature Statistics Augmentation for Federated 3D Medical Volume
Segmentation
- Authors: Yongsong Huang, Wanqing Xie, Mingzhen Li, Mingmei Cheng, Jinzhou Wu,
Weixiao Wang, Jane You, Xiaofeng Liu
- Abstract summary: Federated learning (FL) enables multiple client medical institutes to collaboratively train a deep learning (DL) model with privacy protection.
However, the performance of FL can be constrained by the limited availability of labeled data in small institutes and the heterogeneous (i.e., non-i.i.d.) data distribution across institutes.
We develop a vicinal feature-level data augmentation scheme to efficiently alleviate the local feature shift and facilitate collaborative training for privacy-aware FL segmentation.
- Score: 17.096806029281385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables multiple client medical institutes
to collaboratively train a deep learning (DL) model with privacy protection.
However, the performance of FL can be constrained by the limited availability
of labeled data in small institutes and the heterogeneous (i.e., non-i.i.d.)
data distribution across institutes. Though data augmentation has been a proven
technique to boost the generalization capabilities of conventional centralized
DL as a "free lunch", its application in FL is largely underexplored. Notably,
constrained by costly labeling, 3D medical segmentation generally relies on
data augmentation. In this work, we aim to develop a vicinal feature-level data
augmentation (VFDA) scheme to efficiently alleviate the local feature shift and
facilitate collaborative training for privacy-aware FL segmentation. We take
both the inner- and inter-institute divergence into consideration, without the
need for cross-institute transfer of raw data or their mixup. Specifically, we
exploit the batch-wise feature statistics (e.g., mean and standard deviation)
in each institute to abstractly represent the discrepancy of data, and model
each feature statistic probabilistically via a Gaussian prototype, with the
mean corresponding to the original statistic and the variance quantifying the
augmentation scope. From the vicinal risk minimization perspective, novel
feature statistics can be drawn from the Gaussian distribution to fulfill
augmentation. The variance is explicitly derived by the data bias in each
individual institute and the underlying feature statistics characterized by all
participating institutes. The added-on VFDA consistently yielded marked
improvements over six advanced FL methods on both 3D brain tumor and cardiac
segmentation.
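The augmentation step described above can be sketched in a few lines: compute batch-wise channel statistics of a 3D feature map, treat each statistic as the mean of a Gaussian prototype, draw novel statistics from those Gaussians, and re-style the features with the sampled values. This is an illustrative sketch only; the prototype variances `sigma_mu` and `sigma_std`, which the paper derives from intra- and inter-institute statistic divergence, are taken here as given inputs, and the function name is hypothetical.

```python
import numpy as np

def vfda_augment(feats, sigma_mu, sigma_std, rng=None):
    """Vicinal feature-statistics augmentation (illustrative sketch).

    feats: (N, C, D, H, W) batch of 3D feature maps.
    sigma_mu, sigma_std: (C,) Gaussian-prototype variances for the
    channel means and standard deviations. In the paper these are
    derived from the data bias in each institute and the statistics
    of all participating institutes; here they are assumed inputs.
    """
    if rng is None:
        rng = np.random.default_rng()
    axes = (0, 2, 3, 4)                       # batch + spatial axes
    mu = feats.mean(axis=axes)                # (C,) original statistics
    std = feats.std(axis=axes) + 1e-6
    # Draw novel statistics from Gaussians centred at the originals
    # (vicinal risk minimization over the statistic space).
    mu_new = rng.normal(mu, np.sqrt(sigma_mu))
    std_new = np.clip(rng.normal(std, np.sqrt(sigma_std)), 1e-6, None)
    # Remove the original style, then apply the sampled one.
    shape = (1, -1, 1, 1, 1)
    normed = (feats - mu.reshape(shape)) / std.reshape(shape)
    return normed * std_new.reshape(shape) + mu_new.reshape(shape)
```

Because only per-channel means and standard deviations are perturbed, no raw data or mixed samples ever leave an institute, which is what makes the scheme compatible with privacy-aware FL.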
Related papers
- Anatomical 3D Style Transfer Enabling Efficient Federated Learning with Extremely Low Communication Costs [1.3654846342364306]
We propose a novel federated learning (FL) approach that utilizes 3D style transfer for the multi-organ segmentation task.
By mixing styles based on these clusters, it preserves the anatomical information and leads models to learn intra-organ diversity.
Experiments indicate that our method can maintain its accuracy even in cases where the communication cost is highly limited.
arXiv Detail & Related papers (2024-10-26T07:00:40Z)
- FedGS: Federated Gradient Scaling for Heterogeneous Medical Image Segmentation [0.4499833362998489]
We propose FedGS, a novel FL aggregation method, to improve segmentation performance on small, under-represented targets.
FedGS demonstrates superior performance over FedAvg, particularly for small lesions, across PolypGen and LiTS datasets.
arXiv Detail & Related papers (2024-08-21T15:26:21Z)
- A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation [5.011091042850546]
Adapting foundation models for medical image analysis requires finetuning them on a considerable amount of data.
However, collecting task-specific medical data for such finetuning at a central location raises many privacy concerns.
Although Federated learning (FL) provides an effective means for training on private decentralized data, communication costs in federating large foundation models can quickly become a significant bottleneck.
arXiv Detail & Related papers (2024-07-31T16:48:06Z)
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- StatAvg: Mitigating Data Heterogeneity in Federated Learning for Intrusion Detection Systems [22.259297167311964]
Federated learning (FL) is a decentralized learning technique that enables devices to collaboratively build a shared Machine Learning (ML) or Deep Learning (DL) model without revealing their raw data to a third party.
Due to its privacy-preserving nature, FL has sparked widespread attention for building Intrusion Detection Systems (IDS) within the realm of cybersecurity.
We propose an effective method called Statistical Averaging (StatAvg) to alleviate non-independently and identically distributed (non-iid) features across local clients' data in FL.
arXiv Detail & Related papers (2024-05-20T14:41:59Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- A Simple Data Augmentation for Feature Distribution Skewed Federated Learning [12.636154758643757]
Federated learning (FL) facilitates collaborative learning among multiple clients in a distributed manner, while ensuring privacy protection.
In this paper, we focus on the feature distribution skewed FL scenario, which is widespread in real-world applications.
We propose FedRDN, a simple yet remarkably effective data augmentation method for feature distribution skewed FL.
arXiv Detail & Related papers (2023-06-14T05:46:52Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.