Learning self-calibrated optic disc and cup segmentation from
multi-rater annotations
- URL: http://arxiv.org/abs/2206.05092v2
- Date: Tue, 14 Jun 2022 06:32:11 GMT
- Title: Learning self-calibrated optic disc and cup segmentation from
multi-rater annotations
- Authors: Junde Wu and Huihui Fang and Fangxin Shang and Zhaowei Wang and Dalu
Yang and Wenshuo Zhou and Yehui Yang and Yanwu Xu
- Abstract summary: We propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations.
The proposed method can realize a mutual improvement of both tasks and finally obtain a refined segmentation result.
- Score: 7.104669952770345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The segmentation of the optic disc (OD) and optic cup (OC) from fundus
images is a fundamental task for glaucoma diagnosis. In clinical practice, the
final OD/OC annotation is often obtained by collecting the opinions of multiple
experts, a routine that helps mitigate individual bias. However, when the data
carry multiple annotations, standard deep learning models cannot be applied
directly. In this paper, we propose a novel neural network framework to
learn OD/OC segmentation from multi-rater annotations. The segmentation results
are self-calibrated through the iterative optimization of multi-rater
expertness estimation and calibrated OD/OC segmentation. In this way, the
proposed method can realize a mutual improvement of both tasks and finally
obtain a refined segmentation result. Specifically, we propose the Diverging
Model (DivM) and the Converging Model (ConM) to handle the two tasks,
respectively.
ConM segments the raw image based on the multi-rater expertness map provided by
DivM. DivM generates multi-rater expertness map from the segmentation mask
provided by ConM. The experimental results show that by recurrently running
ConM and DivM, the results can be self-calibrated so as to outperform a range
of state-of-the-art (SOTA) multi-rater segmentation methods.
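The alternation between ConM and DivM described above can be sketched with simple stand-ins: a weighted-fusion "ConM" and an agreement-based "DivM". This is a minimal illustration of the iterative self-calibration idea only, not the authors' network; the fusion rule, the agreement score, and the 0.5 threshold are all assumptions made for the sketch.

```python
def conm(rater_masks, expertness):
    """'ConM' stand-in: fuse the raters' binary masks into one soft
    consensus mask, weighting each rater by its expertness score."""
    total = sum(expertness)
    h, w = len(rater_masks[0]), len(rater_masks[0][0])
    return [[sum(e * m[i][j] for e, m in zip(expertness, rater_masks)) / total
             for j in range(w)] for i in range(h)]

def divm(rater_masks, consensus):
    """'DivM' stand-in: score each rater by its pixel-wise agreement
    with the thresholded consensus mask."""
    h, w = len(consensus), len(consensus[0])
    hard = [[1 if consensus[i][j] > 0.5 else 0 for j in range(w)]
            for i in range(h)]
    return [sum(1 for i in range(h) for j in range(w)
                if m[i][j] == hard[i][j]) / (h * w)
            for m in rater_masks]

def self_calibrate(rater_masks, iters=10):
    """Recurrently run ConM and DivM: start from uniform expertness,
    then let the consensus and the expertness estimates refine each other."""
    expertness = [1.0] * len(rater_masks)
    for _ in range(iters):
        consensus = conm(rater_masks, expertness)
        expertness = divm(rater_masks, consensus)
    return consensus, expertness
```

With three raters where two agree and one diverges, the loop down-weights the outlier and the consensus settles on the majority region, mirroring the mutual improvement the abstract describes.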
Related papers
- MM-UNet: A Mixed MLP Architecture for Improved Ophthalmic Image Segmentation [3.2846676620336632]
Ophthalmic image segmentation serves as a critical foundation for ocular disease diagnosis.
Transformer-based models address these limitations but introduce substantial computational overhead.
We introduce MM-UNet, an efficient Mixed model tailored for ophthalmic image segmentation.
arXiv Detail & Related papers (2024-08-16T08:34:50Z)
- Annotator Consensus Prediction for Medical Image Segmentation with Diffusion Models [70.3497683558609]
A major challenge in the segmentation of medical images is the large inter- and intra-observer variability in annotations provided by multiple experts.
We propose a novel method for multi-expert prediction using diffusion models.
arXiv Detail & Related papers (2023-06-15T10:01:05Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Multi-rater Prism: Learning self-calibrated medical image segmentation from multiple raters [22.837498603928097]
We propose a novel neural network framework, called Multi-Rater Prism (MrPrism) to learn the medical image segmentation from multiple labels.
In this paper, we propose Converging Prism (ConP) and Diverging Prism (DivP) to process the two tasks iteratively.
The experimental results show that by recurrently running ConP and DivP, the two tasks can achieve mutual improvement.
arXiv Detail & Related papers (2022-12-01T15:52:15Z)
- Semi-Supervised and Self-Supervised Collaborative Learning for Prostate 3D MR Image Segmentation [8.527048567343234]
Volumetric magnetic resonance (MR) image segmentation plays an important role in many clinical applications.
Deep learning (DL) has recently achieved state-of-the-art or even human-level performance on various image segmentation tasks.
In this work, we aim to train a semi-supervised and self-supervised collaborative learning framework for prostate 3D MR image segmentation.
arXiv Detail & Related papers (2022-11-16T11:40:13Z)
- CUTS: A Deep Learning and Topological Framework for Multigranular Unsupervised Medical Image Segmentation [8.307551496968156]
We present CUTS, an unsupervised deep learning framework for medical image segmentation.
For each image, it produces an embedding map via intra-image contrastive learning and local patch reconstruction.
CUTS yields a series of coarse-to-fine-grained segmentations that highlight features at various granularities.
arXiv Detail & Related papers (2022-09-23T01:09:06Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.