Inverse Distance Aggregation for Federated Learning with Non-IID Data
- URL: http://arxiv.org/abs/2008.07665v1
- Date: Mon, 17 Aug 2020 23:20:01 GMT
- Title: Inverse Distance Aggregation for Federated Learning with Non-IID Data
- Authors: Yousef Yeganeh, Azade Farshad, Nassir Navab, Shadi Albarqouni
- Abstract summary: Federated learning (FL) has been a promising approach in the field of medical imaging in recent years.
A critical problem in FL, particularly in medical scenarios, is to obtain an accurate shared model that is robust to noisy and out-of-distribution clients.
We propose IDA, a novel adaptive weighting approach for clients based on meta-information, which handles unbalanced and non-IID data.
- Score: 48.48922416867067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been a promising approach in the field of medical
imaging in recent years. A critical problem in FL, particularly in medical
scenarios, is to obtain an accurate shared model that is robust to noisy and
out-of-distribution clients. In this work, we tackle the problem of statistical
heterogeneity in data for FL, which is highly plausible in medical settings
where, for example, the data come from different sites with different scanner
settings. We propose IDA (Inverse Distance Aggregation), a novel adaptive
weighting approach for clients based on meta-information, which handles
unbalanced and non-IID data. We extensively analyze and evaluate our method
against the well-known FL approach, Federated Averaging (FedAvg), as a baseline.
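The abstract does not spell out the aggregation rule, so the snippet below is only a minimal sketch of the inverse-distance idea: each client's aggregation weight is assumed to be proportional to the inverse of the distance between its model parameters and the average of all client parameters, so outlying (e.g., noisy or out-of-distribution) clients receive smaller weights. The function name, the use of an L1 distance, the normalization, and the flat NumPy parameter vectors are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ida_aggregate(client_params, eps=1e-8):
    """Inverse Distance Aggregation (illustrative sketch).

    client_params: list of 1-D numpy arrays, one flattened parameter
    vector per client. Returns the aggregated parameter vector.
    Assumption: weight_k is proportional to 1 / ||mean(params) - params_k||_1.
    """
    stacked = np.stack(client_params)        # shape: (num_clients, num_params)
    mean_params = stacked.mean(axis=0)       # plain average of client models
    # Distance of each client model from the average model (plus eps for stability).
    dists = np.abs(stacked - mean_params).sum(axis=1) + eps
    # Inverse-distance coefficients, normalized to sum to 1.
    inv = 1.0 / dists
    alphas = inv / inv.sum()
    return (alphas[:, None] * stacked).sum(axis=0)

# Example: three clients, the third is an outlier and receives a small weight.
clients = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([5.0, -3.0])]
print(ida_aggregate(clients))
```

Compared with Federated Averaging, which fixes the coefficients by dataset size, this kind of weighting adapts to how far each client's update lies from the consensus model.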
Related papers
- FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep Reinforcement Learning for Medical Imaging [12.307490659840845]
We introduce FedMRL, a novel multi-agent deep reinforcement learning framework designed to address data heterogeneity.
FedMRL incorporates a novel loss function to facilitate fairness among clients, preventing bias in the final global model.
We assess our approach using two publicly available real-world medical datasets, and the results demonstrate that FedMRL significantly outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-07-08T10:10:07Z)
- Think Twice Before Selection: Federated Evidential Active Learning for Medical Image Analysis with Domain Shifts [11.562953837452126]
We make the first attempt to assess the informativeness of local data derived from diverse domains.
We propose a novel methodology termed Federated Evidential Active Learning (FEAL) to calibrate the data evaluation under domain shift.
arXiv Detail & Related papers (2023-12-05T08:32:27Z)
- Improving Multiple Sclerosis Lesion Segmentation Across Clinical Sites: A Federated Learning Approach with Noise-Resilient Training [75.40980802817349]
Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area.
We introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions.
We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites.
arXiv Detail & Related papers (2023-08-31T00:36:10Z)
- Bayesian Federated Inference for estimating Statistical Models based on Non-shared Multicenter Data sets [0.0]
Federated Learning (FL) is a machine learning approach that aims to construct a global model from local inferences in separate data centers.
We implement an alternative Bayesian Federated Inference (BFI) framework for multicenter data with the same aim as FL.
We quantify the performance of the proposed methodology on simulated and real life data.
arXiv Detail & Related papers (2023-02-15T14:11:20Z)
- Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation [66.44449514373746]
Cross-silo federated learning (FL) has attracted much attention in medical imaging analysis with deep learning in recent years.
There can be a gap between a model trained with FL and one trained with centralized training.
We propose a novel training framework, FedSM, to avoid the client drift issue and successfully close this generalization gap.
arXiv Detail & Related papers (2022-03-18T19:50:07Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Improving Performance of Federated Learning based Medical Image Analysis in Non-IID Settings using Image Augmentation [1.5469452301122177]
Federated Learning (FL) is a suitable solution for making use of sensitive data belonging to patients, people, companies, or industries that are obliged to work under rigid privacy constraints.
FL addresses data privacy and security concerns by enabling multiple edge devices or organizations to contribute to training a global model on their local data without sharing it.
This paper introduces a novel method that dynamically balances the data distributions of clients by augmenting images, addressing the non-IID data problem of FL.
arXiv Detail & Related papers (2021-12-12T10:05:42Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Auto-FedAvg: Learnable Federated Averaging for Multi-Institutional Medical Image Segmentation [7.009650174262515]
Federated learning (FL) enables collaborative model training while preserving each participant's privacy.
FedAvg is a standard algorithm that uses fixed weights, typically derived from the dataset sizes at each client, to aggregate the distributed learned models on a server during the FL process (see the sketch after this list).
In this work, we design a new data-driven approach, namely Auto-FedAvg, where aggregation weights are dynamically adjusted.
arXiv Detail & Related papers (2021-04-20T18:29:44Z)
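For reference, the sketch below shows the fixed, dataset-size-proportional weighting that standard FedAvg uses, which both Auto-FedAvg and IDA above replace with adaptive coefficients. The function name and the flat NumPy parameter vectors are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAvg server-side aggregation (illustrative sketch).

    client_params: list of 1-D numpy arrays (one flattened model per client).
    client_sizes: list of local dataset sizes n_k.
    Returns sum_k (n_k / n) * params_k, a dataset-size-weighted average.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()               # fixed weights n_k / n
    stacked = np.stack(client_params)          # (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Example: the client with 300 samples dominates the aggregated model.
params = [np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([2.0, 2.0])]
print(fedavg_aggregate(params, [100, 100, 300]))
```

Auto-FedAvg learns these coefficients during training instead of fixing them, while IDA sets them from the inverse distance of each client model to the average model.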