Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation
- URL: http://arxiv.org/abs/2405.18383v2
- Date: Thu, 15 Aug 2024 19:04:26 GMT
- Title: Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation
- Authors: Dominic LaBella, Katherine Schumacher, Michael Mix, Kevin Leu, Shan McBurney-Lin, Pierre Nedelec, Javier Villanueva-Meyer, Jonathan Shapey, Tom Vercauteren, Kazumi Chia, Omar Al-Salihi, Justin Leu, Lia Halasz, Yury Velichko, Chunhao Wang, John Kirkpatrick, Scott Floyd, Zachary J. Reitman, Trey Mullikin, Ulas Bagci, Sean Sachdev, Jona A. Hattangadi-Gluth, Tyler Seibert, Nikdokht Farid, Connor Puett, Matthew W. Pease, Kevin Shiue, Syed Muhammad Anwar, Shahriar Faghani, Muhammad Ammar Haider, Pranav Warman, Jake Albrecht, András Jakab, Mana Moassefi, Verena Chung, Alejandro Aristizabal, Alexandros Karargyris, Hasan Kassem, Sarthak Pati, Micah Sheller, Christina Huang, Aaron Coley, Siddharth Ghanta, Alex Schneider, Conrad Sharp, Rachit Saluja, Florian Kofler, Philipp Lohmann, Phillipp Vollmuth, Louis Gagnon, Maruf Adewole, Hongwei Bran Li, Anahita Fathi Kazerooni, Nourel Hoda Tahon, Udunna Anazodo, Ahmed W. Moawad, Bjoern Menze, Marius George Linguraru, Mariam Aboian, Benedikt Wiestler, Ujjwal Baid, Gian-Marco Conte, Andreas M. Rauschecker, Ayman Nada, Aly H. Abayazeed, Raymond Huang, Maria Correia de Verdier, Jeffrey D. Rudie, Spyridon Bakas, Evan Calabrese
- Abstract summary: The BraTS-MEN-RT challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs.
Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space.
Target volume annotations adhere to established radiotherapy planning protocols.
- Score: 47.119513326344126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The 2024 Brain Tumor Segmentation Meningioma Radiotherapy (BraTS-MEN-RT) challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs with expert-annotated target labels for patients with intact or postoperative meningioma that underwent either conventional external beam radiotherapy or stereotactic radiosurgery. Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space, accompanied by a single-label "target volume" representing the gross tumor volume (GTV) and any at-risk postoperative site. Target volume annotations adhere to established radiotherapy planning protocols, ensuring consistency across cases and institutions. For preoperative meningiomas, the target volume encompasses the entire GTV and associated nodular dural tail, while for postoperative cases, it includes at-risk resection cavity margins as determined by the treating institution. Case annotations were reviewed and approved by expert neuroradiologists and radiation oncologists. Participating teams will develop, containerize, and evaluate automated segmentation models using this comprehensive dataset. Model performance will be assessed using an adapted lesion-wise Dice Similarity Coefficient and the 95% Hausdorff distance. The top-performing teams will be recognized at the Medical Image Computing and Computer Assisted Intervention Conference in October 2024. BraTS-MEN-RT is expected to significantly advance automated radiotherapy planning by enabling precise tumor segmentation and facilitating tailored treatment, ultimately improving patient outcomes.
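The two evaluation metrics named in the abstract, an adapted lesion-wise Dice Similarity Coefficient and the 95% Hausdorff distance, can be illustrated with a short sketch. The snippet below is a simplified illustration, not the official BraTS-MEN-RT evaluation code: the challenge's lesion-matching rules, dilation of ground-truth lesions, and penalties for false positives and false negatives are more involved than the plain connected-component matching shown here. Binary NumPy masks on a common voxel grid are assumed (for example, loaded with nibabel via `nib.load(path).get_fdata() > 0`).

```python
import numpy as np
from scipy import ndimage


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Plain Dice similarity coefficient for two boolean masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD95) in physical units."""
    surf_a = np.logical_xor(a, ndimage.binary_erosion(a))
    surf_b = np.logical_xor(b, ndimage.binary_erosion(b))
    if not surf_a.any() or not surf_b.any():
        return float("inf")  # undefined when either mask is empty
    # Distance of every voxel to the nearest surface voxel of the other mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return float(np.percentile(np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]), 95))


def lesion_wise_dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Average Dice over ground-truth lesions (connected components)."""
    gt_lab, n_gt = ndimage.label(gt)
    pred_lab, _ = ndimage.label(pred)
    if n_gt == 0:
        return 1.0 if pred.sum() == 0 else 0.0
    scores = []
    for i in range(1, n_gt + 1):
        lesion = gt_lab == i
        hit_ids = np.unique(pred_lab[lesion])         # predicted components overlapping this lesion
        matched = np.isin(pred_lab, hit_ids[hit_ids > 0])
        scores.append(dice(lesion, matched))          # a missed lesion contributes 0
    return float(np.mean(scores))
```

Scoring each ground-truth lesion separately keeps one large, well-segmented target volume from hiding missed smaller lesions, which is why lesion-wise variants of Dice and HD95 are used rather than the plain whole-volume versions.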
Related papers
- The 2024 Brain Tumor Segmentation (BraTS) Challenge: Glioma Segmentation on Post-treatment MRI [5.725734864357991]
The 2024 Brain Tumor Segmentation (BraTS) challenge on post-treatment glioma MRI will provide a community standard and benchmark for state-of-the-art automated segmentation models.
Challenge competitors will develop automated segmentation models to predict four distinct tumor sub-regions.
Models will be evaluated on separate validation and test datasets.
arXiv Detail & Related papers (2024-05-28T17:07:55Z)
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, especially ChatGPT (GPT-3.5-Turbo) and GPT-4, among others.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma [4.435336201147607]
The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models.
Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI.
arXiv Detail & Related papers (2023-05-12T17:52:36Z)
- Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks [33.51490233427579]
Two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task.
The best performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy.
The predicted segmentations can be used to accurately classify patients into those with residual tumor and those with gross total resection.
arXiv Detail & Related papers (2023-04-18T10:14:45Z)
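The last claim in the glioblastoma entry above, deriving a residual-tumor versus gross-total-resection label from the predicted segmentation, amounts to thresholding the predicted residual-tumor volume. The sketch below only illustrates that idea: the volume cut-off and per-voxel volume are placeholders, not values taken from the paper.

```python
import numpy as np


def classify_from_segmentation(pred_mask: np.ndarray,
                               voxel_volume_ml: float,
                               residual_cutoff_ml: float = 0.175) -> str:
    """Map a predicted post-operative tumor mask to a patient-level label.

    The 0.175 ml default is an illustrative placeholder for a residual-tumor
    cut-off; the referenced study's exact criterion may differ.
    """
    residual_volume_ml = float(pred_mask.sum()) * voxel_volume_ml
    return "residual tumor" if residual_volume_ml > residual_cutoff_ml else "gross total resection"


# e.g. voxel_volume_ml = np.prod(spacing_mm) / 1000.0 for voxel spacing given in millimetres
```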
- Segmentation of Planning Target Volume in CT Series for Total Marrow Irradiation Using U-Net [0.0]
We present a deep learning-based auto-contouring method for segmenting Planning Target Volume (PTV) for TMLI treatment using the U-Net architecture.
Our findings are a preliminary but significant step towards developing a segmentation model that has the potential to save radiation oncologists a considerable amount of time.
arXiv Detail & Related papers (2023-04-05T10:40:37Z)
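For the U-Net-based PTV auto-contouring summarized in the entry above, a minimal encoder-decoder with skip connections looks roughly like the following PyTorch sketch. It is an assumption-laden toy model (two resolution levels, single-channel CT slices, 16 base filters), not the authors' architecture or training setup.

```python
import torch
import torch.nn as nn


def conv_block(cin: int, cout: int) -> nn.Sequential:
    """Two 3x3 conv + BatchNorm + ReLU layers, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Two-level U-Net: contracting path, bottleneck, expanding path with skips."""

    def __init__(self, in_ch: int = 1, out_ch: int = 1, base: int = 16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                                   # full resolution
        e2 = self.enc2(self.pool(e1))                       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits; sigmoid + threshold gives the PTV mask


# Hypothetical usage on a batch of single-channel CT slices:
# logits = TinyUNet()(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```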
- Exploiting segmentation labels and representation learning to forecast therapy response of PDAC patients [60.78505216352878]
We propose a hybrid deep neural network pipeline to predict tumour response to initial chemotherapy.
We leverage a combination of representation transfer from segmentation to classification, as well as localisation and representation learning.
Our approach yields a remarkably data-efficient method able to predict treatment response with a ROC-AUC of 63.7% using only 477 datasets in total.
arXiv Detail & Related papers (2022-11-08T11:50:31Z)
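The PDAC entry above hinges on representation transfer from segmentation to classification: features learned by a tumour-segmentation encoder are reused to predict therapy response. The sketch below shows the general pattern in PyTorch with a frozen encoder and a small classification head; the encoder, feature width, and head are illustrative assumptions, and the paper's pipeline additionally uses localisation and other components.

```python
import torch
import torch.nn as nn


class ResponseClassifier(nn.Module):
    """Reuse a segmentation-pretrained encoder to predict treatment response."""

    def __init__(self, encoder: nn.Module, feat_channels: int,
                 n_classes: int = 2, freeze_encoder: bool = True):
        super().__init__()
        self.encoder = encoder  # e.g. the contracting path of a trained segmentation network
        if freeze_encoder:      # keep the segmentation-learned representation fixed
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),         # global pooling over the feature map
            nn.Flatten(),
            nn.Linear(feat_channels, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))    # logits for responder / non-responder


# Hypothetical usage: any encoder mapping (B, C, H, W) to (B, feat_channels, h, w) will do.
# clf = ResponseClassifier(encoder=pretrained_seg_encoder, feat_channels=64)
```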
- Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO) [0.12175619840081271]
We propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data.
Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, and iii) delineates tumor tissue subtypes.
It is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists.
arXiv Detail & Related papers (2022-10-06T18:23:42Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)