Federated Learning for Breast Density Classification: A Real-World
Implementation
- URL: http://arxiv.org/abs/2009.01871v3
- Date: Tue, 20 Oct 2020 13:46:55 GMT
- Title: Federated Learning for Breast Density Classification: A Real-World
Implementation
- Authors: Holger R. Roth, Ken Chang, Praveer Singh, Nir Neumark, Wenqi Li,
Vikash Gupta, Sharut Gupta, Liangqiong Qu, Alvin Ihsani, Bernardo C. Bizzo,
Yuhong Wen, Varun Buch, Meesam Shah, Felipe Kitamura, Matheus Mendonça,
Vitor Lavor, Ahmed Harouni, Colin Compas, Jesse Tetreault, Prerna Dogra, Yan
Cheng, Selnur Erdal, Richard White, Behrooz Hashemian, Thomas Schultz, Miao
Zhang, Adam McCarthy, B. Min Yun, Elshaimaa Sharaf, Katharina V. Hoebel, Jay
B. Patel, Bryan Chen, Sean Ko, Evan Leibovitz, Etta D. Pisano, Laura Coombs,
Daguang Xu, Keith J. Dreyer, Ittai Dayan, Ram C. Naidu, Mona Flores, Daniel
Rubin, Jayashree Kalpathy-Cramer
- Abstract summary: Seven clinical institutions from across the world joined this FL effort to train a model for breast density classification based on Breast Imaging, Reporting & Data System (BI-RADS).
We show that despite substantial differences among the datasets from all sites, we can successfully train AI models in federation.
The results show that models trained using FL perform, on average, 6.3% better than their counterparts trained on an institute's local data alone.
- Score: 19.03378677235258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building robust deep learning-based models requires large quantities of
diverse training data. In this study, we investigate the use of federated
learning (FL) to build medical imaging classification models in a real-world
collaborative setting. Seven clinical institutions from across the world joined
this FL effort to train a model for breast density classification based on
Breast Imaging, Reporting & Data System (BI-RADS). We show that despite
substantial differences among the datasets from all sites (mammography system,
class distribution, and data set size) and without centralizing data, we can
successfully train AI models in federation. The results show that models
trained using FL perform, on average, 6.3% better than their counterparts trained
on an institute's local data alone. Furthermore, we show a 45.8% relative
improvement in the models' generalizability when evaluated on the other
participating sites' testing data.
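The training setup described above can be illustrated with a minimal FedAvg-style sketch: each site trains locally on its private mammograms, and only model weights are aggregated, weighted by site dataset size. The toy CNN, optimizer, and round structure below are simplified assumptions, not the authors' actual client-server implementation:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-in for each site's local model: a tiny CNN
# classifying mammograms into the four BI-RADS density categories.
def make_model():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 4),  # 4 BI-RADS density classes (A-D)
    )

def local_update(model, loader, epochs=1, lr=1e-3):
    """One site's local training pass on its private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg(global_model, site_loaders, rounds=10):
    """Weighted FedAvg: only weights travel, never the images."""
    for _ in range(rounds):
        states, sizes = [], []
        for loader in site_loaders:
            local = copy.deepcopy(global_model)  # broadcast global weights
            states.append(local_update(local, loader))
            sizes.append(len(loader.dataset))
        total = sum(sizes)
        avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```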
Related papers
- FACMIC: Federated Adaptative CLIP Model for Medical Image Classification [12.166024140377337]
We introduce a federated adaptive Contrastive Language-Image Pretraining (CLIP) model for classification tasks.
We employ a light-weight and efficient feature attention module for CLIP that selects suitable features for each client's data.
We propose a domain adaptation technique to reduce differences in data distribution between clients.
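A light-weight feature attention module over frozen CLIP image features might look like the sketch below; the gating design, dimensions, and module name are assumptions, not necessarily FACMIC's exact architecture:

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Hypothetical gating module: re-weights CLIP image features so
    each client can emphasize the dimensions that suit its data."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid(),  # per-dimension weights in (0, 1)
        )

    def forward(self, feats):            # feats: (batch, dim) frozen CLIP features
        return feats * self.gate(feats)  # element-wise feature selection

# Usage: classify the attended features with a small per-client head.
feats = torch.randn(8, 512)             # stand-in for CLIP image embeddings
head = nn.Linear(512, 3)                # e.g., 3 diagnostic classes
logits = head(FeatureAttention()(feats))
```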
arXiv Detail & Related papers (2024-10-08T13:24:10Z)
- FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models [48.484485609995986]
Federated learning of large language models (FedLLM) enables multiple parties to collaboratively train LLMs without directly sharing their data.
There are currently no realistic datasets and benchmarks for FedLLM.
We propose FedLLM-Bench, which involves 8 training methods, 4 training datasets, and 6 evaluation metrics.
arXiv Detail & Related papers (2024-06-07T11:19:30Z)
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized federated learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Classifier Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
arXiv Detail & Related papers (2024-05-10T11:52:53Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
In the real world, data samples usually follow a long-tailed distribution, and FL on such decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets.
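One plausible reading of this re-balancing step, sketched below: classes that are well represented locally contribute classifier gradients from real data, while rare or absent classes fall back to server-provided per-class gradient prototypes. The prototype construction, threshold, and mixing rule here are assumptions, not the paper's exact algorithm:

```python
import torch

def rebalance_classifier_grad(local_grads, global_protos, local_counts, threshold=10):
    """Hypothetical balanced classifier update.

    local_grads:   dict class_id -> classifier gradient from local real data
    global_protos: dict class_id -> server-aggregated gradient prototype
    local_counts:  dict class_id -> number of local samples of that class
    """
    balanced = {}
    for c, proto in global_protos.items():
        if local_counts.get(c, 0) >= threshold:
            balanced[c] = local_grads.get(c, proto)  # enough real data: trust it
        else:
            balanced[c] = proto                      # tail class: use the prototype
    # Average per-class gradients into one balanced classifier update.
    return torch.stack(list(balanced.values())).mean(dim=0)
```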
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels [0.07176066267895696]
Building powerful and robust deep learning models requires training with large multi-party datasets.
We propose flexible federated learning (FFL) for collaborative training on such data.
We demonstrate that, with heterogeneously labeled datasets, FFL-based training leads to a significant performance increase.
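A common way to train across sites whose datasets carry different, partially overlapping label sets is to mask the loss to the labels each site actually annotates. The sketch below shows that idea; it is one plausible realization, not necessarily FFL's exact mechanism:

```python
import torch
import torch.nn.functional as F

def masked_multilabel_loss(logits, targets, label_mask):
    """BCE computed only over the labels annotated at this site.

    logits, targets: (batch, num_labels)
    label_mask:      (num_labels,) bool - True where this site has annotations
    """
    per_label = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")   # (batch, num_labels)
    per_label = per_label[:, label_mask]     # ignore findings this site never labels
    return per_label.mean()

# Example: a site that labels only 3 of 5 possible findings.
logits = torch.randn(4, 5)
targets = torch.randint(0, 2, (4, 5)).float()
mask = torch.tensor([True, True, False, True, False])
loss = masked_multilabel_loss(logits, targets, mask)
```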
arXiv Detail & Related papers (2022-11-24T13:48:54Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
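The per-client synthetic-set construction can be sketched as a distribution-matching objective: optimize a small set of learnable images so their embeddings match those of the real local data. The random-encoder matching below is a simplified assumption borrowed from dataset-condensation heuristics; FedDM's exact objective may differ:

```python
import torch
import torch.nn as nn

def condense_client_data(real_loader, n_synth=10, steps=200, lr=0.1):
    """Learn a tiny synthetic set whose feature statistics match the
    client's real data under a randomly initialized, frozen encoder."""
    encoder = nn.Sequential(                  # random, frozen feature extractor
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for p in encoder.parameters():
        p.requires_grad_(False)

    synth = torch.randn(n_synth, 1, 28, 28, requires_grad=True)
    opt = torch.optim.SGD([synth], lr=lr)
    for _ in range(steps):
        x_real, _ = next(iter(real_loader))   # one real batch per step (sketch)
        loss = (encoder(x_real).mean(0) - encoder(synth).mean(0)).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return synth.detach()   # shared with the server instead of raw data
```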
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Adaptive Personalization in Federated Learning for Highly Non-i.i.d. Data [37.667379000751325]
Federated learning (FL) is a distributed learning method that offers medical institutes the prospect of collaboratively training a global model.
In this work, we investigate an adaptive hierarchical clustering method for FL to produce intermediate semi-global models.
Our experiments demonstrate a significant gain in classification accuracy over standard FL methods under heterogeneous data distributions.
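A hedged sketch of how clients might be grouped for intermediate semi-global models: cluster the clients' weight updates by cosine similarity and aggregate within each cluster. The clustering criterion, linkage, and threshold here are assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_clients(updates, max_dist=0.5):
    """updates: (n_clients, n_params) flattened weight deltas.
    Returns a cluster id per client; clients with similar update
    directions share a semi-global model."""
    dists = pdist(updates, metric="cosine")   # pairwise cosine distances
    tree = linkage(dists, method="average")   # agglomerative clustering
    return fcluster(tree, t=max_dist, criterion="distance")

updates = np.random.randn(6, 100)             # stand-in client updates
labels = cluster_clients(updates)
for c in np.unique(labels):
    semi_global = updates[labels == c].mean(axis=0)  # per-cluster aggregation
```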
arXiv Detail & Related papers (2022-07-07T17:25:04Z)
- Federated Learning for the Classification of Tumor Infiltrating Lymphocytes [5.881088147423591]
We evaluate the performance of federated learning (FL) in developing deep learning models for analysis of digitized tissue sections.
A deep learning classification model was trained using 50x50 square micron patches extracted from whole slide images.
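Converting a 50x50 square micron patch size into pixels depends on the slide's microns-per-pixel resolution; a minimal extraction sketch with openslide-python (the file path and exhaustive tiling are placeholders; real pipelines usually sample tissue-containing regions only):

```python
import openslide

slide = openslide.OpenSlide("slide.svs")  # placeholder WSI path
mpp_x = float(slide.properties[openslide.PROPERTY_NAME_MPP_X])
patch_px = round(50.0 / mpp_x)            # 50 microns -> pixels at level 0

patches = []
w, h = slide.dimensions
for y in range(0, h - patch_px + 1, patch_px):
    for x in range(0, w - patch_px + 1, patch_px):
        # read_region returns an RGBA PIL image at level-0 coordinates
        patch = slide.read_region((x, y), 0, (patch_px, patch_px)).convert("RGB")
        patches.append(patch)
```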
arXiv Detail & Related papers (2022-03-30T19:10:50Z)
- Model-Contrastive Federated Learning [92.9075661456444]
Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data.
We propose MOON: model-contrastive federated learning.
Our experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
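MOON's model-contrastive term pulls the local model's representation of a batch toward the global model's representation and away from the previous-round local model's. A minimal sketch of that loss (the representations here are random stand-ins for encoder outputs):

```python
import torch
import torch.nn.functional as F

def moon_loss(z_local, z_global, z_prev, tau=0.5):
    """Model-contrastive loss from MOON.

    z_local:  representations from the model being trained
    z_global: representations of the same batch under the global model
    z_prev:   representations under the previous-round local model
    """
    sim_g = F.cosine_similarity(z_local, z_global, dim=-1) / tau  # positive pair
    sim_p = F.cosine_similarity(z_local, z_prev, dim=-1) / tau    # negative pair
    logits = torch.stack([sim_g, sim_p], dim=1)                   # (batch, 2)
    # Cross-entropy with the global representation as the "positive" class.
    return F.cross_entropy(logits, torch.zeros(z_local.size(0), dtype=torch.long))

# MOON's local objective combines supervised CE with mu * this term.
z = torch.randn(8, 128); zg = torch.randn(8, 128); zp = torch.randn(8, 128)
loss = moon_loss(z, zg, zp)
```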
arXiv Detail & Related papers (2021-03-30T11:16:57Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
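The posterior exchange can be sketched as mutual distillation on the shared seed set: each participant publishes its softened predictions on seed examples, and peers add a KL term pulling their own predictions toward those posteriors. The temperature and weighting below are assumptions:

```python
import torch
import torch.nn.functional as F

def distill_from_peer(my_logits, peer_probs, T=2.0):
    """KL divergence from a peer's published posteriors on the shared
    seed set; only these probabilities cross the network, never weights,
    gradients, or raw training data."""
    log_p = F.log_softmax(my_logits / T, dim=-1)
    return F.kl_div(log_p, peer_probs, reduction="batchmean") * (T * T)

# Each participant computes posteriors on the shared seed batch...
seed_logits_peer = torch.randn(16, 10)
peer_probs = F.softmax(seed_logits_peer / 2.0, dim=-1)  # what the peer shares
# ...and the receiver distills from them while training locally.
my_logits = torch.randn(16, 10, requires_grad=True)
loss = distill_from_peer(my_logits, peer_probs)
loss.backward()
```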
arXiv Detail & Related papers (2021-01-27T10:10:18Z)