Robust Medical Instrument Segmentation Challenge 2019
- URL: http://arxiv.org/abs/2003.10299v2
- Date: Tue, 19 May 2020 12:27:18 GMT
- Title: Robust Medical Instrument Segmentation Challenge 2019
- Authors: Tobias Ross, Annika Reinke, Peter M. Full, Martin Wagner, Hannes
Kenngott, Martin Apitz, Hellena Hempe, Diana Mindroc Filimon, Patrick Scholz,
Thuy Nuong Tran, Pierangela Bruno, Pablo Arbeláez, Gui-Bin Bian, Sebastian
Bodenstedt, Jon Lindström Bolmgren, Laura Bravo-Sánchez, Hua-Bin Chen,
Cristina González, Dong Guo, Pål Halvorsen, Pheng-Ann Heng, Enes
Hosgor, Zeng-Guang Hou, Fabian Isensee, Debesh Jha, Tingting Jiang, Yueming
Jin, Kadir Kirtac, Sabrina Kletz, Stefan Leger, Zhixuan Li, Klaus H.
Maier-Hein, Zhen-Liang Ni, Michael A. Riegler, Klaus Schoeffmann, Ruohua Shi,
Stefanie Speidel, Michael Stenzel, Isabell Twick, Gutai Wang, Jiacheng Wang,
Liansheng Wang, Lu Wang, Yujie Zhang, Yan-Jie Zhou, Lei Zhu, Manuel
Wiesenfarth, Annette Kopp-Schneider, Beat P. Müller-Stich, Lena Maier-Hein
- Abstract summary: Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
- Score: 56.148440125599905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intraoperative tracking of laparoscopic instruments is often a prerequisite
for computer and robotic-assisted interventions. While numerous methods for
detecting, segmenting and tracking medical instruments in endoscopic video
images have been proposed in the literature, key limitations remain to be
addressed: Firstly, robustness, that is, the reliable performance of
state-of-the-art methods when run on challenging images (e.g. in the presence
of blood, smoke or motion artifacts). Secondly, generalization; algorithms
trained for a specific intervention in a specific hospital should generalize to
other interventions or institutions.
In an effort to promote solutions for these limitations, we organized the
Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an
international benchmarking competition with a specific focus on the robustness
and generalization capabilities of algorithms. For the first time in the field
of endoscopic image processing, our challenge included a task on binary
segmentation and also addressed multi-instance detection and segmentation. The
challenge was based on a surgical data set comprising 10,040 annotated images
acquired from a total of 30 surgical procedures from three different types of
surgery. The validation of the competing methods for the three tasks (binary
segmentation, multi-instance detection and multi-instance segmentation) was
performed in three different stages with an increasing domain gap between the
training and the test data. The results confirm the initial hypothesis, namely
that algorithm performance degrades with an increasing domain gap. While the
average detection and segmentation quality of the best-performing algorithms is
high, future research should concentrate on detection and segmentation of
small, crossing, moving and transparent instruments and instrument parts.
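The binary segmentation task in the abstract above is conventionally scored with an overlap metric such as the Dice similarity coefficient (the challenge also used multi-instance variants of such metrics). As a rough illustration only, not the challenge's official evaluation code, a minimal Dice computation on binary masks might look like:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|). Returns 1.0 for two empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy example: a 4-pixel "instrument" and a prediction that overshoots
# by one column (6 predicted pixels, 4 of them correct).
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1
pred = np.zeros_like(gt)
pred[1:3, 1:4] = 1
print(dice_score(pred, gt))  # 2*4 / (6+4) = 0.8
```

For the multi-instance tasks, a matching step between predicted and reference instances precedes the per-instance overlap computation; that step is omitted here.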
Related papers
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - SAR-RARP50: Segmentation of surgical instrumentation and Action
Recognition on Robot-Assisted Radical Prostatectomy Challenge [72.97934765570069]
We release the first multimodal, publicly available, in-vivo dataset for surgical action recognition and semantic instrumentation segmentation, containing 50 suturing video segments of Robotic Assisted Radical Prostatectomy (RARP).
The aim of the challenge is to enable researchers to leverage the scale of the provided dataset and develop robust and highly accurate single-task action recognition and tool segmentation approaches in the surgical domain.
A total of 12 teams participated in the challenge, contributing 7 action recognition methods, 9 instrument segmentation techniques, and 4 multitask approaches that integrated both action recognition and instrument segmentation.
arXiv Detail & Related papers (2023-12-31T13:32:18Z) - CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition [66.51610049869393]
This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
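The mAP figures quoted above average a per-class average-precision (AP) score over the triplet classes, each computed from confidence-ranked predictions. A simplified sketch of one common per-class AP definition (mean precision at each correctly ranked prediction; the official CholecTriplet metric may differ in its details):

```python
def average_precision(ranked_correct):
    """AP for one class: predictions are sorted by descending confidence,
    and precision is averaged over the ranks of the correct predictions."""
    hits = 0
    precisions = []
    for rank, correct in enumerate(ranked_correct, start=1):
        if correct:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Ranked predictions for one triplet class; True marks a correct prediction.
ap = average_precision([True, False, True, False])
print(ap)  # (1/1 + 2/3) / 2 ≈ 0.833
```

mAP is then simply the mean of these per-class AP values, which is why scores in the 4.2%-38.1% range indicate that many triplet classes are recognized poorly even by the best methods.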
arXiv Detail & Related papers (2022-04-10T18:51:55Z) - TraSeTR: Track-to-Segment Transformer with Contrastive Query for
Instance-level Instrument Segmentation in Robotic Surgery [60.439434751619736]
We propose TraSeTR, a Track-to-Segment Transformer that exploits tracking cues to assist surgical instrument segmentation.
TraSeTR jointly reasons about the instrument type, location, and identity with instance-level predictions.
The effectiveness of our method is demonstrated with state-of-the-art instrument type segmentation results on three public datasets.
arXiv Detail & Related papers (2022-02-17T05:52:18Z) - The Medical Segmentation Decathlon [37.44481677534694]
State-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks.
Consistently good performance on a set of tasks was found to preserve good average performance on a different set of previously unseen tasks.
The training of accurate AI segmentation models is now commoditized to non-AI experts.
arXiv Detail & Related papers (2021-06-10T13:34:06Z) - Deep Learning in Medical Ultrasound Image Segmentation: a Review [9.992387025633805]
Medical ultrasound image segmentation can be a key step in providing a reliable basis for clinical diagnosis and downstream tasks such as 3D reconstruction of human tissues.
Deep learning-based methods for ultrasound image segmentation are categorized into six main groups according to their architectures and training.
In the end, the challenges and potential research directions for medical ultrasound image segmentation are discussed.
arXiv Detail & Related papers (2020-02-18T16:33:22Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.