Lung tumor segmentation in MRI mice scans using 3D nnU-Net with minimum annotations
- URL: http://arxiv.org/abs/2411.00922v2
- Date: Fri, 08 Nov 2024 17:23:05 GMT
- Title: Lung tumor segmentation in MRI mice scans using 3D nnU-Net with minimum annotations
- Authors: Piotr Kaniewski, Fariba Yousefi, Yeman Brhane Hagos, Talha Qaiser, Nikolay Burlutskiy
- Abstract summary: In drug discovery, accurate lung tumor segmentation is an important step for assessing tumor size and its progression.
In this work, we focus on optimizing lung tumor segmentation in mice.
- Score: 1.5495593104596397
- Abstract: In drug discovery, accurate lung tumor segmentation is an important step for assessing tumor size and its progression using in-vivo imaging such as MRI. While deep learning models have been developed to automate this process, the focus has predominantly been on human subjects, neglecting the pivotal role of animal models in pre-clinical drug development. In this work, we focus on optimizing lung tumor segmentation in mice. First, we demonstrate that the nnU-Net model outperforms the U-Net, U-Net3+, and DeepMeta models. Most importantly, we achieve better results with nnU-Net 3D models than 2D models, indicating the importance of spatial context for segmentation tasks in MRI mice scans. This study demonstrates the importance of 3D input over 2D input images for lung tumor segmentation in MRI scans. Finally, we outperform the prior state-of-the-art approach that involves the combined segmentation of lungs and tumors within the lungs. Our approach achieves comparable results using only lung tumor annotations, which requires fewer annotations and saves time and annotation effort. This work (https://anonymous.4open.science/r/lung-tumour-mice-mri-64BB) is an important step in automating pre-clinical animal studies to quantify the efficacy of experimental drugs, particularly in assessing tumor changes.
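Segmentation comparisons like the ones above are conventionally evaluated with the Dice coefficient (overlap between predicted and ground-truth masks). A minimal NumPy sketch of the metric, for illustration only (this is not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks of any dimensionality.

    Dice = 2 * |pred ∩ gt| / (|pred| + |gt|); eps avoids division by
    zero when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Example: half-overlapping 1D masks give a Dice score of 0.5.
pred = np.array([1, 1, 0, 0])
gt = np.array([0, 1, 1, 0])
print(dice_score(pred, gt))  # 0.5
```

The same formula applies unchanged to full 3D volumes, which is why Dice is the common yardstick for both the 2D and 3D models discussed here.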
Related papers
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in the parameter-efficient fine-tuning strategy, updating only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients with Advanced Nasopharyngeal Carcinoma using Pretreatment PET/CT [15.386240118882569]
Nasopharyngeal Carcinoma (NPC) is a worldwide malignant epithelial cancer.
Deep learning has been introduced to the survival prediction in various cancers including NPC.
In this study, we introduced the concept of multi-task learning into deep survival models to address the overfitting problem resulting from small data.
arXiv Detail & Related papers (2021-09-16T04:12:59Z) - 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z) - Classification of Brain Tumours in MR Images using Deep Spatiospatial Models [0.0]
This paper uses two spatiotemporal models, ResNet (2+1)D and ResNet Mixed Convolution, to classify different types of brain tumours.
Both models were observed to outperform the pure 3D convolutional model, ResNet18.
arXiv Detail & Related papers (2021-05-28T19:27:51Z) - Does anatomical contextual information improve 3D U-Net based brain tumor segmentation? [0.0]
It is investigated whether the addition of contextual information from the brain anatomy improves U-Net-based brain tumor segmentation.
The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject.
arXiv Detail & Related papers (2020-10-26T09:57:58Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
The "2018 Left Atrium Challenge" used 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analysis of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm.
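The mean surface-to-surface distance reported above complements Dice by measuring how far the predicted boundary lies from the ground-truth boundary. A brute-force NumPy sketch for small binary volumes, purely illustrative (the challenge's own evaluation code is not shown here; isotropic voxel spacing is assumed):

```python
import numpy as np

def surface_voxels(mask):
    """Coordinates of boundary voxels: inside the mask, but with at
    least one 6-connected neighbor outside."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = np.ones_like(m)
    for axis in range(3):
        for shift in (-1, 1):
            # Neighbor values along this axis, cropped back to the
            # original grid (the False padding handles the borders).
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return np.argwhere(m & ~interior)

def mean_surface_distance(pred, gt, spacing=1.0):
    """Symmetric mean surface-to-surface distance (in units of
    `spacing`) between two binary 3D masks."""
    ps = surface_voxels(pred) * spacing
    gs = surface_voxels(gt) * spacing
    # Pairwise Euclidean distances between the two surface point sets.
    d = np.linalg.norm(ps[:, None, :] - gs[None, :, :], axis=-1)
    d1 = d.min(axis=1)  # each pred surface point -> nearest gt surface point
    d2 = d.min(axis=0)  # each gt surface point -> nearest pred surface point
    return (d1.sum() + d2.sum()) / (len(d1) + len(d2))
```

The pairwise-distance matrix makes this O(n²) in the number of surface voxels; production evaluation code typically uses distance transforms instead, but the metric computed is the same.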
arXiv Detail & Related papers (2020-04-26T08:49:17Z) - Weakly Supervised PET Tumor Detection Using Class Response [3.947298454012977]
We present a novel approach to locate different types of lesions in positron emission tomography (PET) images using only a class label at the image level.
The advantage of our proposed method is that it detects the whole tumor volume in 3D images using only two 2D slices of the PET image, and it shows very promising results.
arXiv Detail & Related papers (2020-03-18T17:06:08Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.