Modality Exchange Network for Retinogeniculate Visual Pathway
Segmentation
- URL: http://arxiv.org/abs/2401.01685v1
- Date: Wed, 3 Jan 2024 11:41:57 GMT
- Title: Modality Exchange Network for Retinogeniculate Visual Pathway
Segmentation
- Authors: Hua Han (1 and 2), Cheng Li (1), Lei Xie (3), Yuanjing Feng (3), Alou
Diakite (1 and 2), Shanshan Wang (1 and 4) ((1) Shenzhen Institute of
Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, (2)
University of Chinese Academy of Sciences, Beijing, China, (3) College of
Information Engineering, Zhejiang University of Technology, Hangzhou, China,
(4) Peng Cheng Laboratory, Shenzhen, China)
- Abstract summary: We propose a novel Modality Exchange Network (ME-Net) that effectively utilizes multi-modal magnetic resonance (MR) imaging information to enhance RGVP segmentation.
Specifically, we design a channel and spatially mixed attention module to exchange modality information between T1-weighted and fractional anisotropy MR images.
Experimental results demonstrate that our method outperforms existing state-of-the-art approaches in terms of RGVP segmentation performance.
- Score: 5.726588626363204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of the retinogeniculate visual pathway (RGVP) aids in
the diagnosis and treatment of visual disorders by identifying disruptions or
abnormalities within the pathway. However, the complex anatomical structure and
connectivity of RGVP make it challenging to achieve accurate segmentation. In
this study, we propose a novel Modality Exchange Network (ME-Net) that
effectively utilizes multi-modal magnetic resonance (MR) imaging information to
enhance RGVP segmentation. Our ME-Net has two main contributions. Firstly, we
introduce an effective multi-modal soft-exchange technique. Specifically, we
design a channel and spatially mixed attention module to exchange modality
information between T1-weighted and fractional anisotropy MR images. Secondly,
we propose a cross-fusion module that further enhances the fusion of
information between the two modalities. Experimental results demonstrate that
our method outperforms existing state-of-the-art approaches in terms of RGVP
segmentation performance.
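The authors' implementation is not reproduced here, but the core soft-exchange idea the abstract describes (a channel-and-spatial mixed attention gate deciding how much of each modality's features to swap into the other stream) can be sketched roughly as follows. All function names, tensor shapes, and the exact gating formula are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(x):
    # x: (C, H, W). Squeeze spatial dims to one gate value per channel.
    pooled = x.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1)
    return sigmoid(pooled)                        # per-channel weight in (0, 1)

def spatial_gate(x):
    # Collapse channels to a single (1, H, W) spatial attention map.
    pooled = x.mean(axis=0, keepdims=True)        # (1, H, W)
    return sigmoid(pooled)

def modality_exchange(t1, fa):
    """Soft exchange between T1-weighted and FA feature maps (hypothetical)."""
    # Mixed gate per modality: channel attention * spatial attention,
    # broadcasting to the full (C, H, W) feature shape.
    g_t1 = channel_gate(t1) * spatial_gate(t1)
    g_fa = channel_gate(fa) * spatial_gate(fa)
    # Each stream keeps its own gated features and takes in the
    # complement-gated features of the other modality.
    t1_out = g_t1 * t1 + (1.0 - g_fa) * fa
    fa_out = g_fa * fa + (1.0 - g_t1) * t1
    return t1_out, fa_out

rng = np.random.default_rng(0)
t1 = rng.standard_normal((4, 8, 8))   # toy T1-weighted feature map
fa = rng.standard_normal((4, 8, 8))   # toy FA feature map
t1_out, fa_out = modality_exchange(t1, fa)
```

Under this reading, "soft" exchange means each modality's features are blended in continuously via attention weights rather than concatenated or hard-swapped, so the network can learn how much cross-modal information to admit at each channel and spatial location.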
Related papers
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion
Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Two-stage MR Image Segmentation Method for Brain Tumors based on
Attention Mechanism [27.08977505280394]
A coordination-spatial attention generation adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
Extracting both the structural and the fine-grained detail information of the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Unsupervised Image Registration Towards Enhancing Performance and
Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which CT imaging process is finely embedded.
We analyze the CT values among different tissues, and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal
Brain Tumor Segmentation from Multi-Views? [5.793853101758628]
This paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation.
Evaluations on BRATS 2015 and BRATS 2018 show that fusing results from multiple views achieves better performance than segmentation from a single view.
arXiv Detail & Related papers (2020-12-21T09:45:23Z)
- Multi-Modality Pathology Segmentation Framework: Application to Cardiac
Magnetic Resonance Images [3.5354617056939874]
This work presents an automatic cascade pathology segmentation framework based on multi-modality CMR images.
It mainly consists of two neural networks: an anatomical structure segmentation network (ASSN) and a pathological region segmentation network (PRSN).
arXiv Detail & Related papers (2020-08-13T09:57:04Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA)
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.