Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS)
Benchmark
- URL: http://arxiv.org/abs/2201.00458v1
- Date: Mon, 3 Jan 2022 03:06:38 GMT
- Title: Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS)
Benchmark
- Authors: Parnian Afshar, Arash Mohammadi, Konstantinos N. Plataniotis, Keyvan
Farahani, Justin Kirby, Anastasia Oikonomou, Amir Asif, Leonard Wee, Andre
Dekker, Xin Wu, Mohammad Ariful Haque, Shahruk Hossain, Md. Kamrul Hasan,
Uday Kamal, Winston Hsu, Jhih-Yuan Lin, M. Sohel Rahman, Nabil Ibtehaz, Sh.
M. Amir Foisol, Kin-Man Lam, Zhong Guang, Runze Zhang, Sumohana S.
Channappayya, Shashank Gupta, Chander Dev
- Abstract summary: Lung cancer is one of the deadliest cancers, and its effective diagnosis and treatment depend on the accurate delineation of the tumor.
Human-centered segmentation, which is currently the most common approach, is subject to inter-observer variability.
The 2018 VIP Cup started with a global engagement from 42 countries to access the competition data.
In a nutshell, all the algorithms proposed during the competition are based on deep learning models combined with a false positive reduction technique.
- Score: 48.30502612686276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lung cancer is one of the deadliest cancers, and in part its effective
diagnosis and treatment depend on the accurate delineation of the tumor.
Human-centered segmentation, which is currently the most common approach, is
subject to inter-observer variability, and is also time-consuming, considering
the fact that only experts are capable of providing annotations. Automatic and
semi-automatic tumor segmentation methods have recently shown promising
results. However, as different researchers have validated their algorithms
using various datasets and performance metrics, reliably evaluating these
methods is still an open challenge. The goal of the Lung-Originated Tumor
Segmentation from Computed Tomography Scan (LOTUS) Benchmark created through
2018 IEEE Video and Image Processing (VIP) Cup competition, is to provide a
unique dataset and pre-defined metrics, so that different researchers can
develop and evaluate their methods in a unified fashion. The 2018 VIP Cup
started with a global engagement from 42 countries to access the competition
data. At the registration stage, there were 129 members clustered into 28 teams
from 10 countries, out of which 9 teams made it to the final stage and 6 teams
successfully completed all the required tasks. In a nutshell, all the
algorithms proposed during the competition are based on deep learning models
combined with a false positive reduction technique. Methods developed by the
three finalists show promising results in tumor segmentation; however, more
effort should be put into reducing the false positive rate. This competition
manuscript presents an overview of the VIP-Cup challenge, along with the
proposed algorithms and results.
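The abstract emphasizes evaluating segmentation methods with pre-defined metrics and reducing the false positive rate. As an illustrative sketch only (the paper does not publish its exact metric definitions here, so the formulas below are the standard ones and the function names are my own), a Dice overlap score and a voxel-level false positive rate for binary tumor masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Standard Dice overlap between two binary masks (1 = tumor, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def false_positive_rate(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of background voxels wrongly labeled as tumor."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    fp = np.logical_and(pred, ~target).sum()
    negatives = (~target).sum()
    return fp / negatives if negatives else 0.0
```

Both functions accept masks of any shape (2D slices or 3D volumes), since the comparisons are element-wise.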
Related papers
- Automatic Organ and Pan-cancer Segmentation in Abdomen CT: the FLARE 2023 Challenge [15.649976310277099]
Organ and cancer segmentation in abdomen Computed Tomography (CT) scans is the prerequisite for precise cancer diagnosis and treatment.
Most existing benchmarks and algorithms are tailored to specific cancer types, limiting their ability to provide comprehensive cancer analysis.
This work presents the first international competition on abdominal organ and pan-cancer segmentation by providing a large-scale and diverse dataset.
arXiv Detail & Related papers (2024-08-22T16:38:45Z)
- Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot-based approach for skin lesions that generalizes well with few labelled data.
The proposed approach comprises a fusion of a segmentation network that acts as an attention module and classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z)
- Towards Tumour Graph Learning for Survival Prediction in Head & Neck Cancer Patients [0.0]
Nearly one million new cases of head & neck cancer were diagnosed worldwide in 2020.
Automated segmentation and prognosis estimation approaches can help ensure each patient gets the most effective treatment.
This paper presents a framework to perform these functions on arbitrary field of view (FoV) PET and CT registered scans.
arXiv Detail & Related papers (2023-04-17T09:32:06Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- Extending nn-UNet for brain tumor segmentation [1.218340575383456]
This paper describes our contribution to the 2021 brain tumor segmentation competition.
We developed our methods based on nn-UNet, the winning entry of last year's competition.
The proposed models won first place in the final ranking on unseen test data.
arXiv Detail & Related papers (2021-12-09T01:51:52Z)
- Redundancy Reduction in Semantic Segmentation of 3D Brain Tumor MRIs [2.946960157989204]
This work modifies the network training process to minimize redundancy under perturbations.
We evaluated the method on the BraTS 2021 validation board, achieving average Dice scores of 0.8600, 0.8868, and 0.9265 for enhancing tumor, tumor core, and whole tumor, respectively.
Our team's (NVAUTO) submission was the top performer in terms of ET and TC scores and within the top 10 teams in terms of WT scores.
arXiv Detail & Related papers (2021-11-01T07:39:06Z)
- Colorectal Cancer Segmentation using Atrous Convolution and Residual Enhanced UNet [0.5353034688884528]
We propose a CNN-based approach, which uses atrous convolutions and residual connections besides the conventional filters.
The proposed AtResUNet was trained on the DigestPath 2019 Challenge dataset for colorectal cancer segmentation, achieving a Dice coefficient of 0.748.
arXiv Detail & Related papers (2021-03-16T19:20:20Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
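Several of the challenge results above are reported as mean Intersection-over-Union (mIoU). As a rough sketch of how such a per-class score can be computed (the function below is my own illustration, not any challenge's official evaluation code), assuming integer-labeled class masks:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes, skipping classes absent from both masks."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears in neither mask; undefined IoU
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Evaluation scripts differ on details such as whether background counts as a class and how absent classes are handled, so reported mIoU values are only comparable within a single challenge.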
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.