Automated Pancreas Segmentation Using Multi-institutional Collaborative
Deep Learning
- URL: http://arxiv.org/abs/2009.13148v1
- Date: Mon, 28 Sep 2020 08:54:10 GMT
- Title: Automated Pancreas Segmentation Using Multi-institutional Collaborative
Deep Learning
- Authors: Pochuan Wang, Chen Shen, Holger R. Roth, Dong Yang, Daguang Xu,
Masahiro Oda, Kazunari Misawa, Po-Ting Chen, Kao-Lang Liu, Wei-Chih Liao,
Weichung Wang, Kensaku Mori
- Abstract summary: We study the use of federated learning between two institutions in a real-world setting to collaboratively train a model.
We quantitatively compare the segmentation models obtained with federated learning and local training alone.
Our experimental results show that federated learning models have higher generalizability than standalone training.
- Score: 9.727026678755678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of deep learning-based methods strongly depends on the
amount of data used for training. Many efforts have been made to increase the data
available in the medical image analysis field. However, unlike photographic images, it is
hard to build centralized databases of medical images because of
numerous technical, legal, and privacy issues. In this work, we study the use
of federated learning between two institutions in a real-world setting to
collaboratively train a model without sharing the raw data across national
boundaries. We quantitatively compare the segmentation models obtained with
federated learning and local training alone. Our experimental results show that
federated learning models have higher generalizability than standalone
training.
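The collaboration described above can be illustrated with a minimal FedAvg-style sketch: each institution trains on its private data, and only model weights (never raw images) cross the institutional boundary for weighted averaging. This is a hypothetical toy illustration on a linear model, not the authors' actual segmentation pipeline.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One institution's local training: gradient descent on linear least squares."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(weights, sizes):
    """Server-side aggregation: average client weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two "institutions" holding differently sized private datasets
X1, X2 = rng.normal(size=(100, 2)), rng.normal(size=(40, 2))
y1, y2 = X1 @ true_w, X2 @ true_w

w_global = np.zeros(2)
for _ in range(20):  # federated rounds: only weights are exchanged, never the data
    w1 = local_update(w_global.copy(), X1, y1)
    w2 = local_update(w_global.copy(), X2, y2)
    w_global = fedavg([w1, w2], [len(y1), len(y2)])

print(np.round(w_global, 3))  # converges toward the shared underlying solution
```

Weighting the average by dataset size is the standard FedAvg choice; in the real multi-institutional setting each round would involve a full training epoch of a segmentation network rather than a few gradient steps.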
Related papers
- Federated Learning for Medical Image Classification: A Comprehensive Benchmark [19.725507209432198]
We conduct a comprehensive evaluation of several state-of-the-art federated learning algorithms in the context of medical imaging.
No single algorithm consistently delivers optimal performance across all medical federated learning scenarios.
Our code will be released on GitHub, offering a reliable and comprehensive benchmark for future federated learning studies in medical imaging.
arXiv Detail & Related papers (2025-04-07T16:22:18Z)
- Multi-Modal One-Shot Federated Ensemble Learning for Medical Data with Vision Large Language Model [27.299068494473016]
We introduce FedMME, an innovative one-shot multi-modal federated ensemble learning framework.
FedMME capitalizes on vision large language models to produce textual reports from medical images.
It surpasses existing one-shot federated learning approaches by more than 17.5% in accuracy on the RSNA dataset.
arXiv Detail & Related papers (2025-01-06T08:36:28Z)
- Coupling AI and Citizen Science in Creation of Enhanced Training Dataset for Medical Image Segmentation [3.7274206780843477]
We introduce a robust and versatile framework that combines AI and crowdsourcing to improve the quality and quantity of medical image datasets.
Our approach utilises a user-friendly online platform that enables a diverse group of crowd annotators to label medical images efficiently.
We employ pix2pixGAN, a generative AI model, to expand the training dataset with synthetic images that capture realistic morphological features.
arXiv Detail & Related papers (2024-09-04T21:22:54Z)
- Split Learning for Distributed Collaborative Training of Deep Learning Models in Health Informatics [20.72616921953282]
We show how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets.
We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning.
arXiv Detail & Related papers (2023-08-21T20:30:51Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
The training of an efficacious deep learning model requires large data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels [0.07176066267895696]
Building powerful and robust deep learning models requires training with large multi-party datasets.
We propose flexible federated learning (FFL) for collaborative training on such data.
We demonstrate that, with heterogeneously labeled datasets, FFL-based training leads to a significant performance increase.
arXiv Detail & Related papers (2022-11-24T13:48:54Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning [12.445324044675116]
We introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations.
We also propose a new side loss that is optimized for compound figure separation.
This is the first study that evaluates the efficacy of leveraging self-supervised learning with compound image separation.
arXiv Detail & Related papers (2022-08-30T16:02:34Z)
- MammoFL: Mammographic Breast Density Estimation using Federated Learning [12.005028432197708]
We automate quantitative mammographic breast density estimation with neural networks.
We show that this tool is a strong use case for federated learning on multi-institutional datasets.
arXiv Detail & Related papers (2022-06-11T17:38:09Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.