Bilateral Hippocampi Segmentation in Low Field MRIs Using Mutual Feature Learning via Dual-Views
- URL: http://arxiv.org/abs/2410.17502v1
- Date: Wed, 23 Oct 2024 02:00:07 GMT
- Title: Bilateral Hippocampi Segmentation in Low Field MRIs Using Mutual Feature Learning via Dual-Views
- Authors: Himashi Peiris, Zhaolin Chen
- Abstract summary: Low-field MRIs are more accessible and cost-effective and eliminate the need for sedation in children.
We present a novel deep-learning approach for the automatic segmentation of bilateral hippocampi in low-field MRIs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate hippocampus segmentation in brain MRI is critical for studying cognitive and memory functions and diagnosing neurodevelopmental disorders. While high-field MRIs provide detailed imaging, low-field MRIs are more accessible and cost-effective, which eliminates the need for sedation in children, though they often suffer from lower image quality. In this paper, we present a novel deep-learning approach for the automatic segmentation of bilateral hippocampi in low-field MRIs. Extending recent advancements in infant brain segmentation to underserved communities through the use of low-field MRIs ensures broader access to essential diagnostic tools, thereby supporting better healthcare outcomes for all children. Inspired by our previous work, Co-BioNet, the proposed model employs a dual-view structure to enable mutual feature learning via high-frequency masking, enhancing segmentation accuracy by leveraging complementary information from different perspectives. Extensive experiments demonstrate that our method provides reliable segmentation outcomes for hippocampal analysis in low-resource settings. The code is publicly available at: https://github.com/himashi92/LoFiHippSeg.
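The abstract's key mechanism, mutual feature learning between two views produced by high-frequency masking, can be pictured with a small frequency-domain filter. The sketch below is an assumption-based illustration only, not the authors' implementation (that lives in the LoFiHippSeg repository linked above): it reads "high-frequency masking" as suppressing high spatial frequencies in k-space so that one branch of a dual-view model sees the full-spectrum volume and the other sees a complementary low-frequency view. The function name `high_frequency_mask` and the `keep_radius` parameter are hypothetical.

```python
import numpy as np

def high_frequency_mask(volume: np.ndarray, keep_radius: float = 0.1) -> np.ndarray:
    """Return a copy of `volume` with high spatial frequencies suppressed.

    Hypothetical sketch of 'high-frequency masking' for building a second
    input view; the actual scheme is defined in the LoFiHippSeg code.
    """
    # Move to k-space with the zero-frequency component centred.
    kspace = np.fft.fftshift(np.fft.fftn(volume))
    # Normalised radial frequency coordinate for every k-space location.
    grids = np.meshgrid(*[np.linspace(-0.5, 0.5, s) for s in volume.shape], indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    # Keep only frequencies inside keep_radius, i.e. mask out the high frequencies.
    lowpass = (radius <= keep_radius).astype(kspace.dtype)
    filtered = np.fft.ifftn(np.fft.ifftshift(kspace * lowpass))
    return np.real(filtered).astype(volume.dtype)

if __name__ == "__main__":
    vol = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a low-field MRI volume
    view_full = vol                         # view 1: original full-spectrum volume
    view_masked = high_frequency_mask(vol)  # view 2: high frequencies masked out
    print(view_full.shape, view_masked.shape)
```

In a dual-view setup, the two views would feed two segmentation branches whose predictions inform each other; only the masking step is sketched here.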
Related papers
- Dual Attention Residual U-Net for Accurate Brain Ultrasound Segmentation in IVH Detection [5.77500692308611]
Intraventricular hemorrhage (IVH) is a severe neurological complication among premature infants. Recent deep learning methods offer promise for computer-aided diagnosis. We propose an enhanced Residual U-Net architecture incorporating two complementary attention mechanisms.
arXiv Detail & Related papers (2025-05-23T09:53:57Z) - 4D Multimodal Co-attention Fusion Network with Latent Contrastive Alignment for Alzheimer's Diagnosis [24.771496672135395]
We propose M2M-AlignNet: a geometry-aware co-attention network with latent alignment for early Alzheimer's diagnosis.
At the core of our approach is a multi-patch-to-multi-patch (M2M) contrastive loss function that quantifies and reduces representational discrepancies.
We conduct extensive experiments to confirm the effectiveness of our method and highlight the correspondence between fMRI and sMRI as AD biomarkers.
arXiv Detail & Related papers (2025-04-23T15:18:55Z) - Brain Tumor Detection in MRI Based on Federated Learning with YOLOv11 [0.0]
Current machine learning approaches have two major limitations: data privacy and high latency.
We propose a federated learning architecture incorporating the YOLOv11 algorithm for more accurate brain tumor detection.
arXiv Detail & Related papers (2025-03-06T04:50:07Z) - BrainMVP: Multi-modal Vision Pre-training for Brain Image Analysis using Multi-parametric MRI [11.569448567735435]
BrainMVP is a multi-modal vision pre-training framework for brain image analysis using multi-parametric MRI scans.
Cross-modal reconstruction is explored to learn distinctive brain image embeddings and efficient modality fusion capabilities.
Experiments on downstream tasks demonstrate superior performance compared to state-of-the-art pre-training methods in the medical domain.
arXiv Detail & Related papers (2024-10-14T15:12:16Z) - Multi-Modality Conditioned Variational U-Net for Field-of-View Extension in Brain Diffusion MRI [10.096809077954095]
An incomplete field-of-view (FOV) in diffusion magnetic resonance imaging (dMRI) can severely hinder the volumetric and bundle analyses of whole-brain white matter connectivity.
We propose a novel framework for imputing dMRI scans in the incomplete part of the FOV by integrating the diffusion features learned from the acquired part of the FOV with the complete brain anatomical structure.
arXiv Detail & Related papers (2024-09-20T18:41:29Z) - An Interpretable Cross-Attentive Multi-modal MRI Fusion Framework for Schizophrenia Diagnosis [46.58592655409785]
We propose a novel Cross-Attentive Multi-modal Fusion framework (CAMF) to capture both intra-modal and inter-modal relationships between fMRI and sMRI.
Our approach significantly improves classification accuracy, as demonstrated by our evaluations on two extensive multi-modal brain imaging datasets.
The gradient-guided Score-CAM is applied to interpret critical functional networks and brain regions involved in schizophrenia.
arXiv Detail & Related papers (2024-03-29T20:32:30Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors represent one of the most fatal cancers worldwide and are very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for
Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - Style transfer between Microscopy and Magnetic Resonance Imaging via
Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using a conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Analyzing Deep Learning Based Brain Tumor Segmentation with Missing MRI
Modalities [6.840531823670822]
Approaches evaluated include the Adversarial Co-training Network (ACN) and a combination of mmGAN and DeepMedic.
Using the BraTS2018 dataset, this work demonstrates that the state-of-the-art ACN performs better, especially when T1c is missing.
A simple combination of mmGAN and DeepMedic also shows strong potential when only one MRI modality is missing.
arXiv Detail & Related papers (2022-08-06T08:41:57Z) - Data and Physics Driven Learning Models for Fast MRI -- Fundamentals and
Methodologies from CNN, GAN to Attention and Transformers [72.047680167969]
This article aims to introduce deep learning-based, data-driven techniques for fast MRI, including convolutional neural network and generative adversarial network based methods.
We will detail the research in coupling physics and data driven models for MRI acceleration.
Finally, we will demonstrate a few clinical applications and explain the importance of data harmonisation and explainable models for such fast MRI techniques in multicentre and multi-scanner studies.
arXiv Detail & Related papers (2022-04-01T22:48:08Z) - Edge-Enhanced Dual Discriminator Generative Adversarial Network for Fast
MRI with Parallel Imaging Using Multi-view Information [10.616409735438756]
We introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction.
One discriminator is used for holistic image reconstruction, whereas the other one is responsible for enhancing edge information.
Results show that our PIDD-GAN provides high-quality reconstructed MR images, with well-preserved edge information.
arXiv Detail & Related papers (2021-12-10T10:49:26Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - 4D Deep Learning for Multiple Sclerosis Lesion Activity Segmentation [49.32653090178743]
We investigate whether extending this problem to full 4D deep learning using a history of MRI volumes can improve performance.
We find that our proposed architecture outperforms previous approaches with a lesion-wise true positive rate of 0.84 at a lesion-wise false positive rate of 0.19.
arXiv Detail & Related papers (2020-04-20T11:41:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.