Fourier Disentangled Multimodal Prior Knowledge Fusion for Red Nucleus
Segmentation in Brain MRI
- URL: http://arxiv.org/abs/2211.01353v1
- Date: Wed, 2 Nov 2022 17:54:52 GMT
- Title: Fourier Disentangled Multimodal Prior Knowledge Fusion for Red Nucleus
Segmentation in Brain MRI
- Authors: Guanghui Fu, Gabriel Jimenez, Sophie Loizillon, Rosana El Jurdi, Lydia
Chougar, Didier Dormont, Romain Valabregue, Ninon Burgos, Stéphane
Lehéricy, Daniel Racoceanu, Olivier Colliot, the ICEBERG Study Group
- Abstract summary: The red nucleus is a structure of the midbrain that plays an important role in parkinsonian disorders.
We propose a new model that integrates prior knowledge from different contrasts for red nucleus segmentation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early and accurate diagnosis of parkinsonian syndromes is critical to provide
appropriate care to patients and for inclusion in therapeutic trials. The red
nucleus is a structure of the midbrain that plays an important role in these
disorders. It can be visualized using iron-sensitive magnetic resonance imaging
(MRI) sequences. Different iron-sensitive contrasts can be produced with MRI.
Combining such multimodal data has the potential to improve segmentation of the
red nucleus. Current multimodal segmentation algorithms are computationally
expensive, cannot handle missing modalities, and require annotations for all
modalities. In this paper, we propose a new model that integrates prior
knowledge from different contrasts for red nucleus segmentation. The method
consists of three main stages. First, it disentangles the image into
high-frequency information representing the brain structure and low-frequency
information representing the contrast. The high-frequency information is then
fed into a network to learn anatomical features, while the multimodal
low-frequency information is processed by another module. Finally, feature
fusion is performed to complete the segmentation task. The proposed method was
used with several iron-sensitive contrasts (iMag, QSM, R2*, SWI). Experiments
demonstrate that our proposed model substantially outperforms a baseline UNet
model when the training set size is very small.
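The first stage described in the abstract, splitting an image into low-frequency (contrast) and high-frequency (structure) components in Fourier space, can be sketched as follows. This is a minimal illustration, assuming a simple circular low-pass mask; the paper's actual cutoff radius and mask shape are not given here, and `radius_frac` is a hypothetical parameter chosen for the example.

```python
import numpy as np

def fourier_disentangle(image, radius_frac=0.1):
    """Split a 2D image into a low-frequency component (contrast)
    and a high-frequency component (structure) via the 2D FFT."""
    # Move the zero-frequency component to the center of the spectrum.
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    radius = radius_frac * min(h, w)
    # Circular low-pass mask around the spectrum center.
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

img = np.random.rand(64, 64)
low, high = fourier_disentangle(img)
# Because the two masks partition the spectrum, the components
# sum back to the original image (up to floating-point error).
print(np.allclose(low + high, img))  # → True
```

In such a scheme, `high` would feed the anatomy network and the per-contrast `low` components would feed the second module before fusion.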
Related papers
- MindFormer: A Transformer Architecture for Multi-Subject Brain Decoding via fMRI [50.55024115943266]
We introduce a new Transformer architecture called MindFormer to generate fMRI-conditioned feature vectors.
MindFormer incorporates two key innovations: 1) a novel training strategy based on the IP-Adapter to extract semantically meaningful features from fMRI signals, and 2) a subject specific token and linear layer that effectively capture individual differences in fMRI signals.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Two-stage MR Image Segmentation Method for Brain Tumors based on
Attention Mechanism [27.08977505280394]
A coordinate-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
Extracting the structural and fine-grained information of the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - SMU-Net: Style matching U-Net for brain tumor segmentation with missing
modalities [4.855689194518905]
We propose a style matching U-Net (SMU-Net) for brain tumour segmentation on MRI images.
Our co-training approach utilizes a content and style-matching mechanism to distill the informative features from the full-modality network into a missing modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
arXiv Detail & Related papers (2022-04-06T17:55:19Z) - Latent Correlation Representation Learning for Brain Tumor Segmentation
with Missing MRI Modalities [2.867517731896504]
Accurately segmenting brain tumor from MR images is the key to clinical diagnostics and treatment planning.
In clinical practice, some imaging modalities are often missing.
We present a novel brain tumor segmentation algorithm with missing modalities.
arXiv Detail & Related papers (2021-04-13T14:21:09Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z) - Meta-modal Information Flow: A Method for Capturing Multimodal Modular
Disconnectivity in Schizophrenia [11.100316178148994]
We introduce a method that takes advantage of multimodal data in addressing the hypotheses of disconnectivity and dysfunction within schizophrenia (SZ).
We propose a modularity-based method that can be applied to the GGM to identify links that are associated with mental illness across a multimodal data set.
Through simulation and real data, we show our approach reveals important information about disease-related network disruptions that are missed with a focus on a single modality.
arXiv Detail & Related papers (2020-01-06T18:46:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.