MSE-Nets: Multi-annotated Semi-supervised Ensemble Networks for
Improving Segmentation of Medical Image with Ambiguous Boundaries
- URL: http://arxiv.org/abs/2311.10380v1
- Date: Fri, 17 Nov 2023 08:14:24 GMT
- Title: MSE-Nets: Multi-annotated Semi-supervised Ensemble Networks for
Improving Segmentation of Medical Image with Ambiguous Boundaries
- Authors: Shuai Wang, Tengjin Weng, Jingyi Wang, Yang Shen, Zhidong Zhao, Yixiu
Liu, Pengfei Jiao, Zhiming Cheng, Yaqi Wang
- Abstract summary: We propose Multi-annotated Semi-supervised Ensemble Networks (MSE-Nets) for learning medical image segmentation from limited multi-annotated data.
We introduce the Network Pairwise Consistency Enhancement (NPCE) module and Multi-Network Pseudo Supervised (MNPS) module to enhance MSE-Nets for the segmentation task.
Experiments on the ISIC dataset show that we reduced the demand for multi-annotated data by 97.75% and narrowed the Jaccard index gap to the best fully-supervised baseline to just 4%.
- Score: 21.513613620213754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation annotations exhibit variations among experts due
to the ambiguous boundaries of segmented objects and backgrounds in medical
images. Although using multiple annotations per image in the
fully-supervised setting has been extensively studied for training deep models,
obtaining a large amount of multi-annotated data is challenging due to the
substantial time and manpower costs required for segmentation annotations,
resulting in most images lacking any annotations. To address this, we propose
Multi-annotated Semi-supervised Ensemble Networks (MSE-Nets) for learning
segmentation from limited multi-annotated and abundant unannotated data.
Specifically, we introduce the Network Pairwise Consistency Enhancement (NPCE)
module and Multi-Network Pseudo Supervised (MNPS) module to enhance MSE-Nets
for the segmentation task by considering two major factors: (1) to optimize the
utilization of all accessible multi-annotated data, the NPCE separates
(dis)agreement annotations of multi-annotated data at the pixel level and
handles agreement and disagreement annotations in different ways, (2) to
mitigate the introduction of imprecise pseudo-labels, the MNPS extends the
training data by leveraging consistent pseudo-labels from unannotated data.
Finally, we improve confidence calibration by averaging the predictions of base
networks. Experiments on the ISIC dataset show that we reduced the demand for
multi-annotated data by 97.75% and narrowed the Jaccard index gap to the best
fully-supervised baseline to just 4%. Furthermore, compared
to other semi-supervised methods that rely only on a single annotation or a
combined fusion approach, the comprehensive experimental results on ISIC and
RIGA datasets demonstrate the superior performance of our proposed method in
medical image segmentation with ambiguous boundaries.
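To make the mechanisms the abstract describes more concrete, the sketch below illustrates the pixel-level agreement/disagreement separation (NPCE), a cross-network pseudo-label consistency check (MNPS), and prediction averaging for confidence calibration. This is a minimal, hypothetical PyTorch illustration assuming binary segmentation masks and an arbitrary number of base networks; it is not the authors' implementation, and the paper's exact formulations may differ.

```python
import torch

def split_agreement_masks(ann_a: torch.Tensor, ann_b: torch.Tensor):
    """Separate two expert annotation maps into pixel-level agreement and
    disagreement masks (the idea behind NPCE); how each set is then used
    in training may differ from this sketch."""
    agree = ann_a == ann_b      # pixels the experts label identically
    return agree, ~agree        # ~agree marks ambiguous-boundary pixels

def consistent_pseudo_labels(logits_list):
    """For unannotated images, keep only pixels where every base network
    predicts the same class (loosely, the MNPS consistency idea)."""
    preds = torch.stack([l.argmax(dim=1) for l in logits_list], dim=0)
    consistent = (preds == preds[0]).all(dim=0)   # same class across networks
    return preds[0], consistent                   # pseudo-label + validity mask

def ensemble_prediction(logits_list):
    """Average base-network softmax outputs for a better-calibrated prediction."""
    return torch.stack([torch.softmax(l, dim=1) for l in logits_list]).mean(dim=0)

# Hypothetical shapes: 2 expert annotations and 3 base networks on a 256x256 image.
ann_a = torch.randint(0, 2, (1, 256, 256))
ann_b = torch.randint(0, 2, (1, 256, 256))
agree, disagree = split_agreement_masks(ann_a, ann_b)

logits = [torch.randn(1, 2, 256, 256) for _ in range(3)]
pseudo, valid = consistent_pseudo_labels(logits)
calibrated = ensemble_prediction(logits)
```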
Related papers
- Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention [1.1155836879100416]
We propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation.
MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features.
E-SDM mitigates information loss in multi-task learning with deep supervision.
arXiv Detail & Related papers (2024-05-10T07:34:36Z) - AIMS: All-Inclusive Multi-Level Segmentation [93.5041381700744]
We propose a new task, All-Inclusive Multi-Level (AIMS), which segments visual regions into three levels: part, entity, and relation.
We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation.
arXiv Detail & Related papers (2023-05-28T16:28:49Z) - Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and
Semi-Supervised Semantic Segmentation [119.009033745244]
This paper presents a Self-supervised Low-Rank Network (SLRNet) for single-stage weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS).
SLRNet uses cross-view self-supervision, that is, it simultaneously predicts several attentive LR representations from different views of an image to learn precise pseudo-labels.
Experiments on the Pascal VOC 2012, COCO, and L2ID datasets demonstrate that our SLRNet outperforms both state-of-the-art WSSS and SSSS methods with a variety of different settings.
arXiv Detail & Related papers (2022-03-19T09:19:55Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with
Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Learning from Partially Overlapping Labels: Image Segmentation under
Annotation Shift [68.6874404805223]
We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross entropy loss can successfully exploit heterogeneously annotated data.
arXiv Detail & Related papers (2021-07-13T09:22:24Z) - Hierarchical Self-Supervised Learning for Medical Image Segmentation
Based on Multi-Domain Data Aggregation [23.616336382437275]
We propose Hierarchical Self-Supervised Learning (HSSL) for medical image segmentation.
We first aggregate a dataset from several medical challenges, then pre-train the network in a self-supervised manner, and finally fine-tune on labeled data.
Compared to learning from scratch, our new method yields better performance on various tasks.
arXiv Detail & Related papers (2021-07-10T18:17:57Z) - Boosting Semi-supervised Image Segmentation with Global and Local Mutual
Information Regularization [9.994508738317585]
We present a novel semi-supervised segmentation method that leverages mutual information (MI) on categorical distributions.
We evaluate the method on three challenging publicly-available datasets for medical image segmentation.
arXiv Detail & Related papers (2021-03-08T15:13:25Z) - D-LEMA: Deep Learning Ensembles from Multiple Annotations -- Application
to Skin Lesion Segmentation [14.266037264648533]
Leveraging a collection of annotators' opinions for an image is an interesting way of estimating a gold standard.
We propose an approach to handle annotators' disagreements when training a deep model.
arXiv Detail & Related papers (2020-12-14T01:51:22Z) - Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with
Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)