MammoFL: Mammographic Breast Density Estimation using Federated Learning
- URL: http://arxiv.org/abs/2206.05575v5
- Date: Thu, 14 Dec 2023 04:29:18 GMT
- Title: MammoFL: Mammographic Breast Density Estimation using Federated Learning
- Authors: Ramya Muthukrishnan, Angelina Heyler, Keshava Katti, Sarthak Pati,
Walter Mankowski, Aprupa Alahari, Michael Sanborn, Emily F. Conant,
Christopher Scott, Stacey Winham, Celine Vachon, Pratik Chaudhari, Despina
Kontos, Spyridon Bakas
- Abstract summary: We automate quantitative mammographic breast density estimation with neural networks.
We show that this tool is a strong use case for federated learning on multi-institutional datasets.
- Score: 12.005028432197708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we automate quantitative mammographic breast density
estimation with neural networks and show that this tool is a strong use case
for federated learning on multi-institutional datasets. Our dataset included
bilateral CC-view and MLO-view mammographic images from two separate
institutions. Two U-Nets were separately trained on algorithm-generated labels
to perform segmentation of the breast and dense tissue from these images and
subsequently calculate breast percent density (PD). The networks were trained
with federated learning and compared to three non-federated baselines: two
trained separately on the two single-institution datasets and one trained on
the aggregated multi-institution dataset. We demonstrate that training on
multi-institution
datasets is critical to algorithm generalizability. We further show that
federated learning on multi-institutional datasets improves model
generalization to unseen data to nearly the same level as centralized training
on multi-institutional datasets, indicating that federated learning can be
applied to our method to improve algorithm generalizability while maintaining
patient privacy.
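The percent-density computation follows directly from the two U-Net outputs: PD is the dense-tissue area expressed as a percentage of the total breast area. Below is a minimal sketch of that step, assuming binary masks thresholded from the two segmentation networks; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def percent_density(breast_mask: np.ndarray, dense_mask: np.ndarray) -> float:
    """Breast percent density: PD = 100 * (dense area / breast area).

    Both inputs are boolean arrays of the same shape, e.g. thresholded
    outputs of the breast and dense-tissue segmentation U-Nets.
    """
    breast_area = breast_mask.sum()
    if breast_area == 0:
        raise ValueError("Breast mask is empty; cannot compute PD.")
    # Count dense pixels only inside the breast region, so stray false
    # positives outside the breast do not inflate PD.
    dense_area = np.logical_and(dense_mask, breast_mask).sum()
    return 100.0 * float(dense_area) / float(breast_area)
```

The abstract does not specify the aggregation rule used during federated training; a FedAvg-style round (the standard baseline of McMahan et al., 2017) is sketched here as an assumption, averaging each institution's parameters weighted by its local dataset size.

```python
from typing import List

import numpy as np

def fedavg_round(client_params: List[List[np.ndarray]],
                 client_sizes: List[int]) -> List[np.ndarray]:
    """One aggregation round: dataset-size-weighted average of the
    per-institution parameter lists (one list of arrays per client)."""
    total = float(sum(client_sizes))
    n_tensors = len(client_params[0])
    return [
        sum((size / total) * params[i]
            for params, size in zip(client_params, client_sizes))
        for i in range(n_tensors)
    ]
```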
Related papers
- Multi-Modal One-Shot Federated Ensemble Learning for Medical Data with Vision Large Language Model [27.299068494473016]
We introduce FedMME, an innovative one-shot multi-modal federated ensemble learning framework.
FedMME capitalizes on vision large language models to produce textual reports from medical images.
It surpasses existing one-shot federated learning approaches by more than 17.5% in accuracy on the RSNA dataset.
arXiv Detail & Related papers (2025-01-06T08:36:28Z) - Federated brain tumor segmentation: an extensive benchmark [2.515027627030043]
We propose an extensive benchmark of federated learning algorithms from all three classes on this task.
We show that some methods from each category can bring a slight performance improvement and potentially limit the final model's bias toward the predominant data distribution of the federation.
arXiv Detail & Related papers (2024-10-07T09:32:19Z) - Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification [2.5091334993691206]
Development of a robust deep-learning model for retinal disease diagnosis requires a substantial dataset for training.
The capacity to generalize effectively on smaller datasets remains a persistent challenge.
We combine a wide range of data sources to improve performance and generalization to new data.
arXiv Detail & Related papers (2024-09-17T17:22:35Z) - Dcl-Net: Dual Contrastive Learning Network for Semi-Supervised
Multi-Organ Segmentation [12.798684146496754]
We propose a two-stage Dual Contrastive Learning Network for semi-supervised multi-organ segmentation (MoS).
In Stage 1, we develop similarity-guided global contrastive learning to explore the implicit continuity and similarity among images.
In Stage 2, we present organ-aware local contrastive learning to further pull class representations together.
arXiv Detail & Related papers (2024-03-06T07:39:33Z) - FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in
Computational Pathology [3.802258033231335]
Federated Multi-Modal (FedMM) is a learning framework that trains multiple single-modal feature extractors to enhance subsequent classification performance.
FedMM notably outperforms two baselines in accuracy and AUC metrics.
arXiv Detail & Related papers (2024-02-24T16:58:42Z) - Predicting Infant Brain Connectivity with Federated Multi-Trajectory
GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate locally learned models across diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Domain Generalization for Mammographic Image Analysis with Contrastive
Learning [62.25104935889111]
Training an efficacious deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z) - Multimodal Clustering Networks for Self-supervised Learning from
Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z) - Automated Pancreas Segmentation Using Multi-institutional Collaborative
Deep Learning [9.727026678755678]
We study the use of federated learning between two institutions in a real-world setting to collaboratively train a model.
We quantitatively compare the segmentation models obtained with federated learning and local training alone.
Our experimental results show that federated learning models have higher generalizability than standalone training.
arXiv Detail & Related papers (2020-09-28T08:54:10Z) - Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for
Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from both unlabeled target data and labeled source data via two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with
Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)