RESECT-SEG: Open access annotations of intra-operative brain tumor
ultrasound images
- URL: http://arxiv.org/abs/2207.07494v1
- Date: Wed, 13 Jul 2022 05:53:30 GMT
- Authors: Bahareh Behboodi, Francois-Xavier Carton, Matthieu Chabanas, Sandrine
De Ribaupierre, Ole Solheim, Bodil K. R. Munkvold, Hassan Rivaz, Yiming Xiao,
Ingerid Reinertsen
- Abstract summary: The RESECT database consists of MR and intraoperative ultrasound (iUS) images of 23 patients who underwent resection surgeries.
The proposed dataset contains tumor tissues and resection cavity annotations of the iUS images.
These labels could also be used to train deep learning approaches.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: Registration and segmentation of magnetic resonance (MR) and
ultrasound (US) images play an essential role in surgical planning and
resection of brain tumors. However, validating these techniques is challenging
due to the scarcity of publicly accessible sources with high-quality ground
truth information. To this end, we propose a unique annotation dataset of tumor
tissues and resection cavities from the previously published RESECT dataset
(Xiao et al. 2017) to encourage more rigorous assessment of image processing
techniques. Acquisition and validation methods: The RESECT database consists of
MR and intraoperative US (iUS) images of 23 patients who underwent resection
surgeries. The proposed dataset contains tumor tissues and resection cavity
annotations of the iUS images. The quality of the annotations was validated by
two highly experienced neurosurgeons against several assessment criteria. Data
format and availability: Annotations of tumor tissues and resection cavities
are provided in 3D NIfTI format. Both sets of annotations are accessible
online at https://osf.io/6y4db. Discussion and potential
applications: The proposed database includes tumor tissue and resection cavity
annotations from real-world clinical ultrasound brain images to evaluate
segmentation and registration methods. These labels could also be used to train
deep learning approaches. Eventually, this dataset should further improve the
quality of image guidance in neurosurgery.
Related papers
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering (arXiv, 2024-09-18)
  We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
  Our approach separates the implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
  We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
- Lumbar Spine Tumor Segmentation and Localization in T2 MRI Images Using AI (arXiv, 2024-05-07)
  This study introduces a novel data augmentation technique aimed at automating spine tumor segmentation and localization through AI approaches.
  A Convolutional Neural Network (CNN) architecture is employed for tumor classification, and 3D vertebral segmentation and labeling techniques help pinpoint the exact location of the tumors in the lumbar spine.
  Results indicate strong performance, with 99% accuracy for tumor segmentation, 98% for tumor classification, and 99% for tumor localization.
- Brain Tumor Segmentation from MRI Images using Deep Learning Techniques (arXiv, 2023-04-29)
  A public MRI dataset contains 3064 T1-weighted images from 233 patients with three variants of brain tumor: meningioma, glioma, and pituitary tumor.
  The dataset files were converted and preprocessed before being used to implement and train several well-known deep learning image segmentation models.
  The experimental findings showed that, among all the applied approaches, the recurrent residual U-Net trained with Adam reaches a mean Intersection over Union of 0.8665 and outperforms the other compared state-of-the-art deep learning models.
- Brain tumor multi-classification and segmentation in MRI images using deep learning (arXiv, 2023-04-20)
  The classification model is based on the EfficientNetB1 architecture and is trained to classify images into four classes: meningioma, glioma, pituitary adenoma, and no tumor.
  The segmentation model is based on the U-Net architecture and is trained to accurately segment the tumor from the MRI images.
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI (arXiv, 2023-03-07)
  We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
  We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% over existing baselines.
- Medical Image Captioning via Generative Pretrained Transformers (arXiv, 2022-09-28)
  We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
  The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
- Deep Learning models for benign and malign Ocular Tumor Growth Estimation (arXiv, 2021-07-09)
  Clinicians often face issues in selecting a suitable image processing algorithm for medical imaging data.
  A strategy for the selection of a proper model is presented here.
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images (arXiv, 2020-11-15)
  We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
  The proposed GAN-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image.
  We train classification models on real images with classic data augmentation methods and on synthetic images.
- ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image Segmentation (arXiv, 2020-09-27)
  We propose a novel deep neural network architecture, the Enhanced Small Tumor-Aware Network (ESTAN), to accurately segment breast tumors.
  ESTAN introduces two encoders to extract and fuse image context information at different scales and utilizes row-column-wise kernels in the encoder to adapt to breast anatomy.
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations (arXiv, 2020-07-24)
  Anomaly detection for magnetic resonance images (MRIs) can be solved with unsupervised methods.
  We propose a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
  We show that by training the models on higher-resolution images and by improving the quality of the reconstructions, we obtain results comparable with different baselines.
- STAN: Small tumor-aware network for breast ultrasound image segmentation (arXiv, 2020-02-03)
  We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
  The proposed approach outperformed state-of-the-art approaches in segmenting small breast tumors.
This list is automatically generated from the titles and abstracts of the papers on this site.