3D Medical Multi-modal Segmentation Network Guided by Multi-source
Correlation Constraint
- URL: http://arxiv.org/abs/2102.03111v1
- Date: Fri, 5 Feb 2021 11:23:12 GMT
- Title: 3D Medical Multi-modal Segmentation Network Guided by Multi-source
Correlation Constraint
- Authors: Tongxue Zhou, Stéphane Canu, Pierre Vera and Su Ruan
- Abstract summary: We propose a multi-modality segmentation network with a correlation constraint.
Experimental results on the BraTS 2018 brain tumor segmentation dataset demonstrate the effectiveness of the proposed method.
- Score: 2.867517731896504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of multimodal segmentation, the correlation between different
modalities can be exploited to improve the segmentation results. In this
paper, we propose a multi-modality segmentation network with a correlation
constraint. Our network includes N model-independent encoding paths for N
image sources, a correlation constraint block, a feature fusion block, and a
decoding path. The model-independent encoding paths capture
modality-specific features from the N modalities. Since the modalities are
strongly correlated, we first propose a linear correlation block to learn the
correlation between modalities; a loss function then guides the network to
learn correlated features based on this block. The block forces the network to
learn the latent correlated features that are more relevant for segmentation.
Considering that not all features extracted by the encoders are useful for
segmentation, we propose a dual-attention fusion block that recalibrates the
features along the modality and spatial paths, suppressing less informative
features and emphasizing useful ones. The fused feature representation is
finally projected by the decoder to obtain the segmentation result.
Experimental results on the BraTS 2018 brain tumor segmentation dataset
demonstrate the effectiveness of the proposed method.
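As a rough PyTorch sketch of the two ideas described in the abstract, the code below pairs a per-channel linear correlation block (whose residual serves as a correlation loss) with a dual-attention fusion that recalibrates features along the modality and spatial paths. The layer sizes, the MSE form of the loss, and the gating scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a linear correlation block with an
# MSE-based correlation loss and a dual-attention (modality + spatial) fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearCorrelationBlock(nn.Module):
    """Learns a per-channel affine map from modality i's features onto
    modality j's; the residual acts as a correlation loss."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, feat_i, feat_j):
        predicted_j = self.gamma * feat_i + self.beta
        return F.mse_loss(predicted_j, feat_j)       # correlation constraint term

class DualAttentionFusion(nn.Module):
    """Recalibrates stacked encoder features along the modality axis
    (which encoder to trust) and the spatial axis (where to look)."""
    def __init__(self, n_modalities, channels):
        super().__init__()
        self.modality_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(n_modalities * channels, n_modalities, 1),
            nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(n_modalities * channels, 1, 3, padding=1),
            nn.Sigmoid())
        self.project = nn.Conv3d(n_modalities * channels, channels, 1)

    def forward(self, feats):                        # list of N tensors (B, C, D, H, W)
        x = torch.cat(feats, dim=1)
        m = self.modality_gate(x)                    # (B, N, 1, 1, 1) modality weights
        feats = [f * m[:, i:i + 1] for i, f in enumerate(feats)]
        x = torch.cat(feats, dim=1)
        x = x * self.spatial_gate(x)                 # spatial recalibration
        return self.project(x)                       # fused representation for the decoder

feats = [torch.randn(1, 16, 8, 32, 32) for _ in range(4)]        # e.g. four MRI modalities
corr_loss = LinearCorrelationBlock(16)(feats[0], feats[1])
fused = DualAttentionFusion(n_modalities=4, channels=16)(feats)
print(corr_loss.item(), fused.shape)                             # scalar, (1, 16, 8, 32, 32)
```

During training, such a correlation term would typically be added to the segmentation loss with a tunable weight; the weighting here would be an assumption.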
Related papers
- DiffVein: A Unified Diffusion Network for Finger Vein Segmentation and
Authentication [50.017055360261665]
We introduce DiffVein, a unified diffusion model-based framework which simultaneously addresses vein segmentation and authentication tasks.
For better feature interaction between these two branches, we introduce two specialized modules.
In this way, our framework allows for a dynamic interplay between diffusion and segmentation embeddings.
arXiv Detail & Related papers (2024-02-03T06:49:42Z)
- Narrowing the semantic gaps in U-Net with learnable skip connections:
The case of medical image segmentation [12.812992773512871]
We propose a new segmentation framework, named UDTransNet, to solve three semantic gaps in U-Net.
Specifically, we propose a Dual Attention Transformer (DAT) module for capturing the channel- and spatial-wise relationships, and a Decoder-guided Recalibration Attention (DRA) module for effectively connecting the DAT tokens and the decoder features.
Our UDTransNet produces higher evaluation scores and finer segmentation results with relatively fewer parameters than state-of-the-art segmentation methods on different public datasets.
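As a generic illustration of channel- and spatial-wise attention over flattened skip-connection tokens (the relationships the DAT module is described as capturing), here is a small PyTorch sketch; it is not the UDTransNet code, and the squeeze-excite style channel gate and head count are assumptions.

```python
# Generic channel-then-spatial attention over skip-connection tokens (B, L, C);
# not the UDTransNet DAT/DRA implementation.
import torch
import torch.nn as nn

class DualAttentionTokens(nn.Module):
    def __init__(self, dim, heads=4, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(            # squeeze-excite style channel weights
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.Sigmoid())
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                        # tokens: (B, L, C)
        gate = self.channel_gate(tokens.mean(dim=1))  # (B, C) channel-wise weights
        tokens = tokens * gate.unsqueeze(1)           # channel recalibration
        attn_out, _ = self.spatial_attn(tokens, tokens, tokens)  # token-to-token relations
        return self.norm(tokens + attn_out)

x = torch.randn(2, 512, 64)                           # 512 flattened positions, 64 channels
print(DualAttentionTokens(64)(x).shape)               # torch.Size([2, 512, 64])
```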
arXiv Detail & Related papers (2023-12-23T07:39:42Z)
- Object Segmentation by Mining Cross-Modal Semantics [68.88086621181628]
We propose a novel approach by mining the Cross-Modal Semantics to guide the fusion and decoding of multimodal features.
Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision.
arXiv Detail & Related papers (2023-05-17T14:30:11Z)
- Discriminative Co-Saliency and Background Mining Transformer for
Co-Salient Object Detection [111.04994415248736]
We propose a Discriminative co-saliency and background Mining Transformer framework (DMT).
We use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules.
Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method.
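The pixel-to-token correlation idea can be illustrated with a minimal PyTorch sketch in which learnable co-saliency and background tokens are compared with every pixel embedding by cosine similarity; this shows only the general mechanism, not the DMT modules.

```python
# Illustration of pixel-to-token correlation: learnable co-saliency/background
# tokens are matched against every pixel embedding (not the DMT modules).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelToTokenCorrelation(nn.Module):
    def __init__(self, dim, n_tokens=2):              # e.g. one co-saliency + one background token
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, dim))

    def forward(self, feat):                           # feat: (B, C, H, W)
        b, _, h, w = feat.shape
        pixels = F.normalize(feat.flatten(2).transpose(1, 2), dim=-1)  # (B, H*W, C)
        tokens = F.normalize(self.tokens, dim=-1)                      # (n_tokens, C)
        corr = pixels @ tokens.t()                                     # cosine correlation
        return corr.transpose(1, 2).reshape(b, -1, h, w)               # per-token score maps

maps = PixelToTokenCorrelation(64)(torch.randn(1, 64, 32, 32))
print(maps.shape)                                      # torch.Size([1, 2, 32, 32])
```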
arXiv Detail & Related papers (2023-04-30T15:56:47Z)
- FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced
Context-Aware Network [48.912196729711624]
Few-shot semantic segmentation is the task of learning to locate each pixel of a novel class in a query image with only a few annotated support images.
We propose a Feature-Enhanced Context-Aware Network (FECANet) to suppress the matching noise caused by inter-class local similarity.
In addition, we propose a novel correlation reconstruction module that encodes additional correspondence relations between foreground and background, as well as multi-scale contextual semantic features.
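For intuition, the dense query-support correlation that few-shot segmentation methods such as this one build on can be sketched as follows; the masking and cosine-similarity choices are assumptions, not FECANet's exact correlation reconstruction module.

```python
# Generic dense query-support correlation for few-shot segmentation; the
# masking and cosine similarity here are illustrative, not FECANet's module.
import torch
import torch.nn.functional as F

def dense_correlation(query_feat, support_feat, support_mask):
    """query_feat, support_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}."""
    q = F.normalize(query_feat.flatten(2), dim=1)                     # (B, C, Hq*Wq)
    s = F.normalize((support_feat * support_mask).flatten(2), dim=1)  # masked support pixels
    corr = torch.einsum('bcq,bcs->bqs', q, s)                         # all query-support pairs
    return corr.clamp(min=0)                                          # keep positive matches

corr = dense_correlation(torch.randn(1, 64, 16, 16),
                         torch.randn(1, 64, 16, 16),
                         torch.ones(1, 1, 16, 16))
print(corr.shape)                                                     # torch.Size([1, 256, 256])
```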
arXiv Detail & Related papers (2023-01-19T16:31:13Z)
- UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BraTS, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
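A simplified PyTorch sketch of learning spatial- and channel-wise attention from the same tokens and fusing the two branches is shown below; it is not the released UNETR++ EPA block, and the projections, scaling, and residual fusion are assumptions.

```python
# Simplified spatial + channel attention over the same tokens; not the
# released UNETR++ EPA block.
import torch
import torch.nn as nn

class PairedAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)   # projections for the channel branch

    def forward(self, x):                                # x: (B, L, C) flattened voxel tokens
        spatial_out, _ = self.spatial(x, x, x)           # attention over spatial positions
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        affinity = q.transpose(1, 2) @ k / q.shape[1] ** 0.5          # (B, C, C)
        channel_out = (affinity.softmax(dim=-1) @ v.transpose(1, 2)).transpose(1, 2)
        return x + spatial_out + channel_out             # residual fusion of both branches

x = torch.randn(2, 128, 64)                              # 128 voxel tokens, 64 channels
print(PairedAttention(64)(x).shape)                      # torch.Size([2, 128, 64])
```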
arXiv Detail & Related papers (2022-12-08T18:59:57Z)
- A Tri-attention Fusion Guided Multi-modal Segmentation Network [2.867517731896504]
We propose a multi-modality segmentation network guided by a novel tri-attention fusion.
Our network includes N model-independent encoding paths with N image sources, a tri-attention fusion block, a dual-attention fusion block, and a decoding path.
Experimental results on the BraTS 2018 dataset for brain tumor segmentation demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-11-02T14:36:53Z)
- Adaptive feature recombination and recalibration for semantic
segmentation with Fully Convolutional Networks [57.64866581615309]
We propose recombination of features and a spatially adaptive recalibration block that is adapted for semantic segmentation with Fully Convolutional Networks.
Results indicate that recombination and recalibration improve a competitive baseline and generalize across three different problems.
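A minimal sketch of the two named operations, linear recombination of feature maps with 1x1 convolutions followed by a spatially adaptive recalibration gate, might look as follows; the dilated-convolution gate and layer sizes are assumptions rather than the authors' configuration.

```python
# Minimal recombination (1x1 conv expand/compress) plus a spatially adaptive
# recalibration gate; layer sizes are illustrative, not the authors' setup.
import torch
import torch.nn as nn

class RecombinationRecalibration(nn.Module):
    def __init__(self, channels, expansion=4, dilation=2):
        super().__init__()
        self.recombine = nn.Sequential(                  # expand, mix, then compress channels
            nn.Conv2d(channels, channels * expansion, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels * expansion, channels, 1))
        self.recalibrate = nn.Sequential(                # per-position channel gate
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):                                # x: (B, C, H, W)
        x = self.recombine(x)
        return x * self.recalibrate(x)                   # spatially adaptive recalibration

y = RecombinationRecalibration(32)(torch.randn(1, 32, 48, 48))
print(y.shape)                                           # torch.Size([1, 32, 48, 48])
```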
arXiv Detail & Related papers (2020-06-19T15:45:03Z)
- Brain tumor segmentation with missing modalities via latent multi-source
correlation representation [6.060020806741279]
A novel correlation representation block is proposed to specifically discover the latent multi-source correlation.
Thanks to the obtained correlation representation, the segmentation becomes more robust in the case of missing modalities.
We evaluate our model on the BraTS 2018 dataset; it outperforms the current state-of-the-art method and produces robust results when one or more modalities are missing.
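A toy sketch of why a learned linear correlation helps under missing modalities: if modality j's features can be approximated as an affine function of modality i's, the network can fall back on that estimate whenever modality j is absent. The affine form and the simple averaging fusion below are illustrative assumptions, not the paper's exact formulation.

```python
# Toy fallback based on a learned affine correlation between two modalities;
# the fusion rule and shapes are assumptions, not the paper's formulation.
import torch
import torch.nn as nn

class CorrelationFallback(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, feat_i, feat_j=None):
        estimate_j = self.alpha * feat_i + self.beta     # modality j predicted from modality i
        if feat_j is None:                               # modality j missing at test time
            return estimate_j
        return 0.5 * (feat_j + estimate_j)               # simple fusion when both are present

f_i = torch.randn(1, 8, 4, 16, 16)
print(CorrelationFallback(8)(f_i).shape)                 # torch.Size([1, 8, 4, 16, 16])
```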
arXiv Detail & Related papers (2020-03-19T15:47:36Z)