An Optimization Framework for Processing and Transfer Learning for the
Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2402.07008v1
- Date: Sat, 10 Feb 2024 18:03:15 GMT
- Title: An Optimization Framework for Processing and Transfer Learning for the
Brain Tumor Segmentation
- Authors: Tianyi Ren, Ethan Honey, Harshitha Rebala, Abhishek Sharma, Agamdeep
Chopra, Mehmet Kurt
- Abstract summary: We have constructed an optimization framework based on a 3D U-Net model for brain tumor segmentation.
This framework incorporates a range of techniques, including various pre-processing and post-processing techniques, and transfer learning.
On the validation datasets, this multi-modality brain tumor segmentation framework achieves an average lesion-wise Dice score of 0.79, 0.72, 0.74 on Challenges 1, 2, 3 respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tumor segmentation from multi-modal brain MRI images is a challenging task
due to the limited samples, high variance in shapes and uneven distribution of
tumor morphology. The performance of automated medical image segmentation has
improved significantly with recent advances in deep learning. However,
the model predictions have not yet reached the desired level for clinical use
in terms of accuracy and generalizability. In order to address the distinct
problems presented in Challenges 1, 2, and 3 of BraTS 2023, we have constructed
an optimization framework based on a 3D U-Net model for brain tumor
segmentation. This framework incorporates a range of techniques, including
various pre-processing and post-processing techniques, and transfer learning.
On the validation datasets, this multi-modality brain tumor segmentation
framework achieves average lesion-wise Dice scores of 0.79, 0.72, and 0.74 on
Challenges 1, 2, and 3, respectively.
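The lesion-wise Dice metric reported above is built on the standard Dice similarity coefficient between binary masks (in BraTS 2023 it is additionally computed per matched lesion and averaged, which is not reproduced here). As a minimal sketch of the underlying voxel-wise Dice computation on toy 3D masks (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / total

# Toy 3D masks standing in for a predicted and a reference segmentation
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True   # 8 predicted voxels
truth[1:3, 1:3, 2:4] = True  # 8 reference voxels, 4 of them overlapping
print(dice_score(pred, truth))  # → 0.5
```

A score of 1.0 means the predicted and reference masks coincide exactly; 0.0 means no overlap at all.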
Related papers
- MBDRes-U-Net: Multi-Scale Lightweight Brain Tumor Segmentation Network [0.0]
This study proposes the MBDRes-U-Net model using the three-dimensional (3D) U-Net framework, which integrates multibranch residual blocks and fused attention into the model.
The computational burden of the model is reduced by the branch strategy, which effectively uses the rich local features in multimodal images.
arXiv Detail & Related papers (2024-11-04T09:03:43Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Automated Ensemble-Based Segmentation of Adult Brain Tumors: A Novel
Approach Using the BraTS AFRICA Challenge Data [0.0]
We introduce an ensemble method that comprises eleven unique variations based on three core architectures.
Our findings reveal that the ensemble approach, combining different architectures, outperforms single models.
These results underline the potential of tailored deep learning techniques in precisely segmenting brain tumors.
arXiv Detail & Related papers (2023-08-14T15:34:22Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z) - HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BraTS 2020 testing set show that our proposed approach achieves Dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z) - Brain tumour segmentation using cascaded 3D densely-connected U-net [10.667165962654996]
We propose a deep-learning based method to segment a brain tumour into its subregions.
The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture.
Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice Scores of 0.90, 0.82, and 0.78 for whole tumour, tumour core and enhancing tumour respectively.
arXiv Detail & Related papers (2020-09-16T09:14:59Z) - Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain
MRI [47.26574993639482]
We show improved anomaly segmentation performance and the general capability to obtain much more crisp reconstructions of input data at native resolution.
The modeling of the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales.
arXiv Detail & Related papers (2020-06-23T09:20:42Z) - Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs [2.4736005621421686]
Multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation.
We evaluate the method on BraTS 2019 challenge.
arXiv Detail & Related papers (2020-01-06T07:47:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.