Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks
- URL: http://arxiv.org/abs/2008.08946v1
- Date: Sat, 15 Aug 2020 02:57:23 GMT
- Title: Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks
- Authors: Wangbin Ding, Lei Li, Xiahai Zhuang, Liqin Huang
- Abstract summary: High-level structure information can provide reliable similarity measurement for cross-modality images.
This work presents a new MAS framework for cross-modality images, where both image registration and label fusion are achieved by deep neural networks (DNNs).
For image registration, we propose a consistent registration network, which can jointly estimate forward and backward dense displacement fields (DDFs).
For label fusion, we adapt a few-shot learning network to measure the similarity of atlas and target patches.
- Score: 20.87045880678701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Both image registration and label fusion in the multi-atlas segmentation
(MAS) rely on the intensity similarity between target and atlas images.
However, such similarity can be problematic when target and atlas images are
acquired using different imaging protocols. High-level structure information
can provide reliable similarity measurement for cross-modality images when
cooperating with deep neural networks (DNNs). This work presents a new MAS
framework for cross-modality images, where both image registration and label
fusion are achieved by DNNs. For image registration, we propose a consistent
registration network, which can jointly estimate forward and backward dense
displacement fields (DDFs). Additionally, an invertible constraint is employed
in the network to reduce the correspondence ambiguity of the estimated DDFs.
For label fusion, we adapt a few-shot learning network to measure the
similarity of atlas and target patches. Moreover, the network can be seamlessly
integrated into the patch-based label fusion. The proposed framework is
evaluated on the MM-WHS dataset of MICCAI 2017. Results show that the framework
is effective in both cross-modality registration and segmentation.
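The abstract's two key components can be illustrated with a toy sketch: an inverse-consistency penalty on a pair of forward/backward displacement fields, and similarity-weighted patch-based label fusion. This is a minimal 1-D NumPy illustration of the general ideas only; all function names are illustrative, and the paper's actual networks, losses, and 3-D fields are not reproduced here.

```python
# Toy sketch (not the paper's implementation): inverse-consistency penalty
# for paired displacement fields, and similarity-weighted label fusion.
import numpy as np

def compose_ddf_1d(fwd, bwd):
    """Compose a backward DDF with a forward DDF on a 1-D grid.

    For perfectly inverse fields, x + fwd(x) + bwd(x + fwd(x)) == x,
    so the composed residual is zero everywhere.
    """
    n = len(fwd)
    grid = np.arange(n, dtype=float)
    warped = grid + fwd                       # where each voxel maps to
    # sample bwd at the warped (non-integer) positions by linear interpolation
    bwd_at_warped = np.interp(warped, grid, bwd)
    return warped + bwd_at_warped - grid      # residual; zero if invertible

def inverse_consistency_loss(fwd, bwd):
    """Mean squared residual of the composed fields (the invertible constraint)."""
    return float(np.mean(compose_ddf_1d(fwd, bwd) ** 2))

def fuse_labels(atlas_votes, similarities):
    """Patch-based label fusion: weight each atlas vote by its similarity score."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                           # normalise weights to sum to 1
    votes = np.asarray(atlas_votes, dtype=float)  # (n_atlases, n_classes), one-hot
    return int(np.argmax(w @ votes))          # class with highest weighted vote

# A translation by +2 and its exact inverse give zero consistency loss.
print(inverse_consistency_loss(np.full(8, 2.0), np.full(8, -2.0)))  # 0.0

# Three atlases vote for classes {1, 1, 0}; similarity favours the first two.
print(fuse_labels([[0, 1], [0, 1], [1, 0]], [0.9, 0.8, 0.1]))       # 1
```

In the paper's setting the similarity weights would come from the few-shot learning network rather than being given, and the consistency penalty would act as a regulariser during registration-network training.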
Related papers
- One registration is worth two segmentations [12.163299991979574]
The goal of image registration is to establish spatial correspondence between two or more images.
We propose an alternative but more intuitive correspondence representation: a set of corresponding regions-of-interest (ROI) pairs.
We experimentally show that the proposed SAMReg is capable of segmenting and matching multiple ROI pairs.
arXiv Detail & Related papers (2024-05-17T16:14:32Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- Matching in the Wild: Learning Anatomical Embeddings for Multi-Modality Images [28.221419419614183]
Radiotherapists require accurate registration of MR/CT images to effectively use information from both modalities.
Recent learning-based methods have shown promising results in the rigid/affine step.
We propose a new approach called Cross-SAM to enable cross-modality matching.
arXiv Detail & Related papers (2023-07-07T11:49:06Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks [20.87045880678701]
Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation.
Many conventional MAS methods employed the atlases from the same modality as the target image.
In this work, we design a novel cross-modality MAS framework, which uses available atlases from a certain modality to segment a target image from another modality.
arXiv Detail & Related papers (2022-02-04T07:10:00Z)
- Similarity-Aware Fusion Network for 3D Semantic Segmentation [87.51314162700315]
We propose a similarity-aware fusion network (SAFNet) to adaptively fuse 2D images and 3D point clouds for 3D semantic segmentation.
We employ a late fusion strategy where we first learn the geometric and contextual similarities between the input and back-projected (from 2D pixels) point clouds.
We show that SAFNet significantly outperforms existing state-of-the-art fusion-based approaches across various levels of data integrity.
arXiv Detail & Related papers (2021-07-04T09:28:18Z)
- Instance-Aware Graph Convolutional Network for Multi-Label Classification [55.131166957803345]
Graph convolutional neural network (GCN) has effectively boosted the multi-label image recognition task.
We propose an instance-aware graph convolutional neural network (IA-GCN) framework for multi-label classification.
arXiv Detail & Related papers (2020-08-19T12:49:28Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- MvMM-RegNet: A new image registration framework based on multivariate mixture model and neural network estimation [14.36896617430302]
We propose a new image registration framework based on a multivariate mixture model (MvMM) and neural network estimation.
A generative model consolidating both appearance and anatomical information is established to derive a novel loss function capable of implementing groupwise registration.
We highlight the versatility of the proposed framework for various applications on multimodal cardiac images.
arXiv Detail & Related papers (2020-06-28T11:19:15Z)
- CoMIR: Contrastive Multimodal Image Representation for Registration [4.543268895439618]
We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations).
CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures.
arXiv Detail & Related papers (2020-06-11T10:51:33Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state of the art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.