ReFuSeg: Regularized Multi-Modal Fusion for Precise Brain Tumour
Segmentation
- URL: http://arxiv.org/abs/2308.13883v1
- Date: Sat, 26 Aug 2023 13:41:56 GMT
- Title: ReFuSeg: Regularized Multi-Modal Fusion for Precise Brain Tumour
Segmentation
- Authors: Aditya Kasliwal, Sankarshanaa Sagaram, Laven Srivastava, Pratinav
Seth, Adil Khan
- Abstract summary: This paper presents a novel multi-modal approach for brain lesion segmentation that leverages information from four distinct imaging modalities.
Our proposed regularization module makes the model robust to these missing-modality scenarios and ensures reliable lesion segmentation.
- Score: 5.967412944432766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation of brain tumours is a fundamental task in medical image
analysis that can help clinicians in diagnosing the patient and tracking the
progression of any malignant entities. Accurate segmentation of brain lesions
is essential for medical diagnosis and treatment planning. However, failure to
acquire specific MRI imaging modalities can prevent applications from operating
in critical situations, raising concerns about their reliability and overall
trustworthiness. This paper presents a novel multi-modal approach to brain
lesion segmentation that leverages information from four distinct MRI
modalities (T1, T1c, T2, and FLAIR) while remaining robust to real-world
scenarios in which some modalities are missing. The proposed method also helps
address the challenges posed by artifacts in medical imagery arising from data
acquisition errors (such as patient motion) or a reconstruction algorithm's
inability to represent the anatomy, while managing the accompanying trade-off
in accuracy. Our proposed regularization module makes the model robust to these
scenarios and ensures reliable lesion segmentation.
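The abstract above describes fusing four MRI modalities (T1, T1c, T2, FLAIR) while staying robust when some of them are missing, with a regularization module providing that robustness. The paper's own code is not shown here, so the snippet below is only a minimal PyTorch sketch of the general idea under assumed design choices: per-modality encoders whose features are masked and averaged over the available modalities, plus a consistency regularizer that pulls missing-modality predictions toward the full-modality prediction. The names SimpleFusionSegmenter and consistency_regularizer, and every architectural detail, are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusionSegmenter(nn.Module):
    """Toy four-modality fusion segmenter; NOT the ReFuSeg architecture."""

    def __init__(self, n_classes=4, feat=16):
        super().__init__()
        # One lightweight encoder per modality (T1, T1c, T2, FLAIR).
        self.encoders = nn.ModuleList(
            [nn.Conv3d(1, feat, kernel_size=3, padding=1) for _ in range(4)]
        )
        self.decoder = nn.Conv3d(feat, n_classes, kernel_size=1)

    def forward(self, x, present):
        # x: (B, 4, D, H, W) stacked modalities; present: (B, 4) availability mask.
        feats = []
        for m, enc in enumerate(self.encoders):
            f = F.relu(enc(x[:, m:m + 1]))
            feats.append(f * present[:, m].view(-1, 1, 1, 1, 1))  # zero out missing
        # Average features over the modalities that are actually present.
        denom = present.sum(dim=1).clamp(min=1).view(-1, 1, 1, 1, 1)
        fused = torch.stack(feats, dim=0).sum(dim=0) / denom
        return self.decoder(fused)

def consistency_regularizer(model, x, p_drop=0.5):
    """KL term pulling missing-modality predictions toward full-modality ones."""
    full = torch.ones(x.size(0), 4, device=x.device)
    dropped = (torch.rand_like(full) > p_drop).float()  # simulate missing modalities
    dropped[dropped.sum(dim=1) == 0, 0] = 1.0           # keep at least one modality
    with torch.no_grad():
        ref = model(x, full).softmax(dim=1)              # full-modality "teacher"
    pred = model(x, dropped).log_softmax(dim=1)
    return F.kl_div(pred, ref, reduction="batchmean")
```

In a training loop, such a term would typically be added to a standard segmentation loss (e.g., Dice or cross-entropy) with a weighting coefficient chosen on validation data.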
Related papers
- Large Scale Supervised Pretraining For Traumatic Brain Injury Segmentation [1.1203032569015594]
Segmentation of lesions in msTBI presents a significant challenge due to the diverse characteristics of these lesions.
AIMS-TBI Challenge 2024 aims to advance innovative segmentation algorithms specifically designed for T1-weighted MRI data.
We train a Resenc L network on a comprehensive collection of datasets covering various anatomical and pathological structures.
Following this, the model is fine-tuned on msTBI-specific data to optimize its performance for the unique characteristics of T1-weighted MRI scans.
arXiv Detail & Related papers (2025-04-09T09:52:45Z)
- Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using Conditional Diffusion Model [2.507050016527729]
Tri-modal medical image fusion can provide a more comprehensive view of the disease's shape, location, and biological activity.
Due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited.
There is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information.
arXiv Detail & Related papers (2024-04-26T12:13:41Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Learning from imperfect training data using a robust loss function: application to brain image segmentation [0.0]
In brain MRI analysis, head segmentation is commonly used for measuring and visualizing the brain's anatomical structures.
Here we propose a deep learning framework that can segment brain, skull, and extra-cranial tissue using only T1-weighted MRI as input.
arXiv Detail & Related papers (2022-08-08T19:08:32Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Segmentation of 2D Brain MR Images [0.0]
The purpose of this project is to provide an automatic brain tumour segmentation method for MRI images.
Early diagnosis of brain tumours plays a crucial role in improving treatment possibilities and increasing the survival rate of patients.
arXiv Detail & Related papers (2021-11-05T10:23:09Z)
- QuickTumorNet: Fast Automatic Multi-Class Segmentation of Brain Tumors [0.0]
Manual segmentation of brain tumors from 3D MRI volumes is a time-consuming task.
Our model, QuickTumorNet, demonstrated fast, reliable, and accurate brain tumor segmentation.
arXiv Detail & Related papers (2020-12-22T23:16:43Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose each input modality into a modality-specific appearance code, and the disentangled features are combined through gated fusion (a minimal illustrative sketch of such gating follows this entry).
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
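The feature-disentanglement paper above combines per-modality features through gated fusion so that absent modalities can simply be excluded. As a rough illustration only, the following sketch shows one common way to implement channel-wise gated fusion over whichever modalities are present; the class name GatedModalityFusion, the hyperparameters, and the gating design are hypothetical and do not reproduce the cited paper's architecture.

```python
import torch
import torch.nn as nn

class GatedModalityFusion(nn.Module):
    """Toy channel-wise gated fusion of per-modality feature maps (illustrative only)."""

    def __init__(self, n_modalities=4, channels=16):
        super().__init__()
        # One gating head per modality: maps its feature map to per-channel weights.
        self.gates = nn.ModuleList(
            [nn.Conv3d(channels, channels, kernel_size=1) for _ in range(n_modalities)]
        )

    def forward(self, feats, present):
        # feats: list of (B, C, D, H, W) per-modality feature maps.
        # present: (B, n_modalities) binary mask of available modalities.
        weighted, total = 0.0, 0.0
        for m, (f, gate) in enumerate(zip(feats, self.gates)):
            avail = present[:, m].view(-1, 1, 1, 1, 1)
            w = torch.sigmoid(gate(f)) * avail  # gate weights, zeroed for missing modalities
            weighted = weighted + w * f
            total = total + w
        return weighted / total.clamp(min=1e-6)  # normalized gated sum

# Usage sketch with random tensors standing in for encoder outputs.
fusion = GatedModalityFusion(n_modalities=4, channels=16)
feats = [torch.randn(2, 16, 8, 8, 8) for _ in range(4)]
present = torch.tensor([[1., 1., 1., 1.],
                        [1., 0., 1., 0.]])  # second sample is missing two modalities
fused = fusion(feats, present)  # (2, 16, 8, 8, 8)
```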
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information above and is not responsible for any consequences of its use.