SegViz: A Federated Learning Framework for Medical Image Segmentation
from Distributed Datasets with Different and Incomplete Annotations
- URL: http://arxiv.org/abs/2301.07074v1
- Date: Tue, 17 Jan 2023 18:36:57 GMT
- Title: SegViz: A Federated Learning Framework for Medical Image Segmentation
from Distributed Datasets with Different and Incomplete Annotations
- Authors: Adway U. Kanhere, Pranav Kulkarni, Paul H. Yi, Vishwa S. Parekh
- Abstract summary: We developed SegViz, a federated learning framework for aggregating knowledge from distributed medical image segmentation datasets.
SegViz was trained to build a single model capable of segmenting both the liver and the spleen by aggregating knowledge from both nodes.
Our results demonstrate SegViz as an essential first step towards training clinically translatable multi-task segmentation models.
- Score: 3.6704226968275258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation is one of the primary tasks in the application of deep learning
in medical imaging, owing to its multiple downstream clinical applications. As
a result, many large-scale segmentation datasets have been curated and released
for the segmentation of different anatomical structures. However, these
datasets focus on the segmentation of a subset of anatomical structures in the
body, therefore, training a model for each dataset would potentially result in
hundreds of models and thus limit their clinical translational utility.
Furthermore, many of these datasets share the same field of view but have
different subsets of annotations, thus making individual dataset annotations
incomplete. To that end, we developed SegViz, a federated learning framework
for aggregating knowledge from distributed medical image segmentation datasets
with different and incomplete annotations into a `global` meta-model. The
SegViz framework was trained to build a single model capable of segmenting both
the liver and the spleen, aggregating knowledge from both nodes by combining the
model weights after every 10 epochs. The global SegViz model was tested on an
external dataset, Beyond the Cranial Vault (BTCV), which contains both liver and
spleen annotations, using the Dice similarity (DS) metric. The baseline
individual segmentation models for spleen and liver trained on their respective
datasets produced a DS score of 0.834 and 0.878 on the BTCV test set. In
comparison, the SegViz model produced comparable mean DS scores of 0.829 and
0.899 for the segmentation of the spleen and liver respectively. Our results
demonstrate SegViz as an essential first step towards training clinically
translatable multi-task segmentation models from distributed datasets with
disjoint, incomplete annotations, while maintaining excellent performance.
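The weight-aggregation step described in the abstract can be sketched as a simple FedAvg-style parameter average across nodes. This is a minimal illustration, not the authors' implementation: the two-node (liver/spleen) setup and 10-epoch schedule come from the abstract, while the plain-average rule and the parameter names are assumptions.

```python
import numpy as np

def aggregate_weights(node_weights):
    """FedAvg-style aggregation: average each parameter tensor across
    nodes. `node_weights` is a list of dicts (one per node) mapping
    parameter names to numpy arrays of identical shapes."""
    keys = node_weights[0].keys()
    return {k: np.mean([w[k] for w in node_weights], axis=0) for k in keys}

# Toy example: two nodes (e.g. a liver node and a spleen node), each
# holding its own locally trained copy of a one-parameter model.
# In the SegViz setting this step would run after every 10 local epochs.
liver_node = {"conv1": np.array([1.0, 2.0])}
spleen_node = {"conv1": np.array([3.0, 4.0])}

global_model = aggregate_weights([liver_node, spleen_node])
print(global_model["conv1"])  # -> [2. 3.]
```

The averaged `global_model` would then be broadcast back to both nodes as the starting point for the next round of local training.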
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- One model to use them all: Training a segmentation model with complementary datasets [38.73145509617609]
We propose a method to combine partially annotated datasets, which provide complementary annotations, into one model.
Our approach successfully combines 6 classes into one model, increasing the overall Dice Score by 4.4%.
By including information on multiple classes, we were able to reduce confusion between stomach and colon by 24%.
arXiv Detail & Related papers (2024-02-29T16:46:49Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation [17.69933345468061]
Annotation scarcity has become a major obstacle for training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework to exploit more unlabeled data for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-11-20T11:35:52Z)
- CEmb-SAM: Segment Anything Model with Condition Embedding for Joint Learning from Heterogeneous Datasets [3.894987097246834]
We consider the problem of jointly learning from heterogeneous datasets.
We merge the heterogeneous datasets into one dataset and refer to each component dataset as a subgroup.
Experiments show that CEmb-SAM outperforms the baseline methods on ultrasound image segmentation for peripheral nerves and breast cancer.
arXiv Detail & Related papers (2023-08-14T06:22:49Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Correlation-Aware Mutual Learning for Semi-supervised Medical Image Segmentation [5.045813144375637]
Most existing semi-supervised segmentation methods only focus on extracting information from unlabeled data.
We propose a novel Correlation-Aware Mutual Learning framework that leverages labeled data to guide the extraction of information from unlabeled data.
Our approach is based on a mutual learning strategy that incorporates two modules: the Cross-sample Mutual Attention Module (CMA) and the Omni-Correlation Consistency Module (OCC).
arXiv Detail & Related papers (2023-07-12T17:20:05Z)
- Universal Segmentation of 33 Anatomies [19.194539991903593]
We present an approach for learning a single model that universally segments 33 anatomical structures.
We learn such a model from a union of multiple datasets, with each dataset containing partially labeled images.
We evaluate our model on multiple open-source datasets, demonstrating good generalization performance.
arXiv Detail & Related papers (2022-03-04T02:29:54Z)
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that matches the performance of state-of-the-art supervised methods on 7 benchmark datasets.
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
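Most of the papers above, including SegViz itself, evaluate with the Dice similarity coefficient (the DS metric in the abstract). As a reference, a minimal sketch of that metric on binary masks; the toy masks below are illustrative, not drawn from any of the datasets mentioned:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient for binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|). `eps` avoids division by zero when
    both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: 1 overlapping pixel, 2 foreground pixels in each mask,
# so Dice = 2 * 1 / (2 + 2) = 0.5.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(round(dice_score(pred, target), 3))  # -> 0.5
```

A score of 1.0 means a perfect overlap; the 0.8–0.9 range reported by SegViz and the baselines above indicates strong but imperfect agreement with the reference annotations.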
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.