Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks
- URL: http://arxiv.org/abs/2202.02000v1
- Date: Fri, 4 Feb 2022 07:10:00 GMT
- Title: Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks
- Authors: Wangbin Ding, Lei Li, Xiahai Zhuang, Liqin Huang
- Abstract summary: Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation.
Many conventional MAS methods employed the atlases from the same modality as the target image.
In this work, we design a novel cross-modality MAS framework, which uses available atlases from a certain modality to segment a target image from another modality.
- Score: 20.87045880678701
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-atlas segmentation (MAS) is a promising framework for medical image
segmentation. Generally, MAS methods register multiple atlases, i.e., medical
images with corresponding labels, to a target image; and the transformed atlas
labels can be combined to generate target segmentation via label fusion
schemes. Many conventional MAS methods employed the atlases from the same
modality as the target image. However, the number of atlases with the same
modality may be limited or even missing in many clinical applications. Besides,
conventional MAS methods suffer from the computational burden of registration
or label fusion procedures. In this work, we design a novel cross-modality MAS
framework, which uses available atlases from a certain modality to segment a
target image from another modality. To boost the computational efficiency of
the framework, both the image registration and label fusion are achieved by
well-designed deep neural networks. For the atlas-to-target image registration,
we propose a bi-directional registration network (BiRegNet), which can
efficiently align images from different modalities. For the label fusion, we
design a similarity estimation network (SimNet), which estimates the fusion
weight of each atlas by measuring its similarity to the target image. SimNet
can learn multi-scale information for similarity estimation to improve the
performance of label fusion. The proposed framework was evaluated by the left
ventricle and liver segmentation tasks on the MM-WHS and CHAOS datasets,
respectively. Results have shown that the framework is effective for
cross-modality MAS in both registration and label fusion. The code will be
released publicly on https://github.com/NanYoMy/cmmas once the manuscript
is accepted.
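The fusion step the abstract describes, combining warped atlas labels using per-atlas weights, can be sketched as similarity-weighted voting. This is a minimal NumPy illustration, not the authors' code: the scalar weights here stand in for SimNet's similarity estimates, and all names are hypothetical.

```python
import numpy as np

def weighted_label_fusion(warped_labels, weights):
    """Fuse registered atlas label maps by similarity-weighted voting.

    warped_labels: list of integer label maps (same shape), each an
                   atlas segmentation already warped onto the target.
    weights: one scalar per atlas, e.g. a SimNet-style similarity
             score between that atlas and the target image.
    """
    warped_labels = [np.asarray(l) for l in warped_labels]
    n_classes = int(max(l.max() for l in warped_labels)) + 1
    votes = np.zeros(warped_labels[0].shape + (n_classes,))
    for label_map, w in zip(warped_labels, weights):
        # accumulate each atlas's one-hot vote, scaled by its weight
        votes += w * np.eye(n_classes)[label_map]
    # pick the class with the highest weighted vote at each pixel
    return votes.argmax(axis=-1)

# two 2x2 atlas label maps that disagree on one pixel; the atlas
# judged more similar to the target (weight 0.9) wins the vote
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
fused = weighted_label_fusion([a, b], weights=[0.9, 0.1])  # -> [[0, 1], [1, 1]]
```

Majority voting is the unweighted special case (all weights equal); the learned weights let more target-similar atlases dominate the fusion.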
Related papers
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Heterogeneous Semantic Transfer for Multi-label Recognition with Partial Labels [70.45813147115126]
Multi-label image recognition with partial labels (MLR-PL) may greatly reduce the cost of annotation and thus facilitate large-scale MLR.
We find that strong semantic correlations exist within each image and across different images.
These correlations can help transfer the knowledge possessed by the known labels to retrieve the unknown labels.
arXiv Detail & Related papers (2022-05-23T08:37:38Z) - Graph Attention Transformer Network for Multi-Label Image Classification [50.0297353509294]
We propose a general framework for multi-label image classification that can effectively mine complex inter-label relationships.
Our proposed methods can achieve state-of-the-art performance on three datasets.
arXiv Detail & Related papers (2022-03-08T12:39:05Z) - Reference-guided Pseudo-Label Generation for Medical Semantic
Segmentation [25.76014072179711]
We propose a novel approach to generate supervision for semi-supervised semantic segmentation.
We use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in a reference set.
We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation, albeit with 95% fewer labeled images.
arXiv Detail & Related papers (2021-12-01T12:21:24Z) - Factorisation-based Image Labelling [0.9319432628663639]
We propose a patch-based label propagation approach based on a generative model with latent variables.
We compare our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labeling.
arXiv Detail & Related papers (2021-11-19T17:10:54Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Modeling the Probabilistic Distribution of Unlabeled Data for One-shot
Medical Image Segmentation [40.41161371507547]
We develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) image segmentation.
Our method exploits only one labeled MRI image (named atlas) and a few unlabeled images.
Our method outperforms the state-of-the-art one-shot medical segmentation methods.
arXiv Detail & Related papers (2021-02-03T12:28:04Z) - Knowledge-Guided Multi-Label Few-Shot Learning for General Image
Recognition [75.44233392355711]
KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z) - Instance-Aware Graph Convolutional Network for Multi-Label
Classification [55.131166957803345]
Graph convolutional neural network (GCN) has effectively boosted the multi-label image recognition task.
We propose an instance-aware graph convolutional neural network (IA-GCN) framework for multi-label classification.
arXiv Detail & Related papers (2020-08-19T12:49:28Z) - Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks [20.87045880678701]
High-level structure information can provide reliable similarity measurement for cross-modality images.
This work presents a new MAS framework for cross-modality images, where both image registration and label fusion are achieved by deep neural networks (DNNs).
For image registration, we propose a consistent registration network, which can jointly estimate forward and backward dense displacement fields (DDFs).
For label fusion, we adapt a few-shot learning network to measure the similarity of atlas and target patches.
arXiv Detail & Related papers (2020-08-15T02:57:23Z)
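A dense displacement field (DDF), as estimated by the registration network above, gives each target pixel an offset into the atlas; warping an atlas label map with such a field amounts to resampling. The sketch below uses nearest-neighbour sampling (to keep labels discrete) and is only a 2-D NumPy illustration under assumed conventions, not the paper's implementation:

```python
import numpy as np

def warp_labels_with_ddf(labels, ddf):
    """Warp a 2-D integer label map with a dense displacement field.

    labels: (H, W) integer segmentation of the moving (atlas) image.
    ddf:    (H, W, 2) displacements; each target pixel (y, x) samples
            the atlas at (y + ddf[y, x, 0], x + ddf[y, x, 1]).
    """
    h, w = labels.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # round to the nearest source pixel and clamp to the image bounds
    src_y = np.clip(np.rint(ys + ddf[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + ddf[..., 1]).astype(int), 0, w - 1)
    return labels[src_y, src_x]

# an all-zero (identity) field leaves the label map unchanged
lab = np.array([[0, 1], [2, 3]])
warped = warp_labels_with_ddf(lab, np.zeros((2, 2, 2)))  # -> same as lab
```

Estimating forward and backward DDFs jointly, as the consistent registration network does, lets atlas labels be warped to the target while an inverse-consistency constraint regularises both fields.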
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.