Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor
Operating Room
- URL: http://arxiv.org/abs/2005.03501v5
- Date: Tue, 23 Feb 2021 14:32:49 GMT
- Title: Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor
Operating Room
- Authors: Lena Maier-Hein, Martin Wagner, Tobias Ross, Annika Reinke, Sebastian
Bodenstedt, Peter M. Full, Hellena Hempe, Diana Mindroc-Filimon, Patrick
Scholz, Thuy Nuong Tran, Pierangela Bruno, Anna Kisilenko, Benjamin Müller,
Tornike Davitashvili, Manuela Capek, Minu Tizabi, Matthias Eisenmann, Tim J.
Adler, Janek Gröhl, Melanie Schellenberg, Silvia Seidlitz, T. Y. Emmy Lai,
Bünyamin Pekdemir, Veith Roethlingshoefer, Fabian Both, Sebastian Bittel,
Marc Mengler, Lars Mündermann, Martin Apitz, Annette Kopp-Schneider,
Stefanie Speidel, Hannes G. Kenngott, Beat P. Müller-Stich
- Abstract summary: This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms.
Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery.
- Score: 1.6276355161958829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based tracking of medical instruments is an integral part of surgical
data science applications. Previous research has addressed the tasks of
detecting, segmenting and tracking medical instruments based on laparoscopic
video data. However, the proposed methods still tend to fail when applied to
challenging images and do not generalize well to data they have not been
trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set -
the first publicly available data set enabling comprehensive benchmarking of
medical instrument detection and segmentation algorithms with a specific
emphasis on method robustness and generalization capabilities. Our data set
comprises 30 laparoscopic videos and corresponding sensor data from medical
devices in the operating room for three different types of laparoscopic
surgery. Annotations include surgical phase labels for all video frames as well
as information on instrument presence and corresponding instance-wise
segmentation masks for surgical instruments (if any) in more than 10,000
individual frames. The data has successfully been used to organize
international competitions within the Endoscopic Vision Challenges 2017 and
2019.
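The annotations described above (per-frame phase labels plus instance-wise instrument masks) are typically evaluated per instance rather than per class. As a minimal sketch, the following Python function computes a per-instance IoU between a reference and a predicted mask; it assumes masks are stored as integer-labelled arrays (0 = background, each instrument instance a distinct id), which is an illustrative convention, not the documented HeiCo release format:

```python
import numpy as np

def instance_iou(ref: np.ndarray, pred: np.ndarray) -> dict:
    """Match each reference instance to the predicted instance with the
    largest overlap and report IoU per reference instance id.

    Both arrays are integer-labelled instance masks; 0 is background.
    """
    scores = {}
    for ref_id in np.unique(ref):
        if ref_id == 0:
            continue  # skip background
        ref_mask = ref == ref_id
        # Predicted labels covering this reference instance.
        overlap = pred[ref_mask]
        labels, counts = np.unique(overlap[overlap > 0], return_counts=True)
        if labels.size == 0:
            # No predicted instance touches this reference instance.
            scores[int(ref_id)] = 0.0
            continue
        # Greedy match: take the predicted instance with maximal intersection.
        pred_mask = pred == labels[np.argmax(counts)]
        inter = np.logical_and(ref_mask, pred_mask).sum()
        union = np.logical_or(ref_mask, pred_mask).sum()
        scores[int(ref_id)] = float(inter / union)
    return scores
```

A frame-level score can then be obtained by averaging the per-instance values, and empty frames (no instruments) handled separately, as is common in instance segmentation benchmarks.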
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning [65.54680361074882]
The Eye-gaze Guided Multi-modal Alignment (EGMA) framework harnesses eye-gaze data for better alignment of medical visual and textual features.
We conduct downstream tasks of image classification and image-text retrieval on four medical datasets.
arXiv Detail & Related papers (2024-03-19T03:59:14Z)
- CholecTrack20: A Dataset for Multi-Class Multiple Tool Tracking in Laparoscopic Surgery [1.8076340162131013]
CholecTrack20 is an extensive dataset meticulously annotated for multi-class multi-tool tracking across three perspectives.
The dataset comprises 20 laparoscopic videos with over 35,000 frames and 65,000 annotated tool instances.
arXiv Detail & Related papers (2023-12-12T15:18:15Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- EndoMapper dataset of complete calibrated endoscopy procedures [8.577980383972005]
This paper introduces the Endomapper dataset, the first collection of complete endoscopy sequences acquired during regular medical practice.
The data will be used to build 3D mapping and localization systems that can perform special tasks such as, for example, detecting blind zones during exploration.
arXiv Detail & Related papers (2022-04-29T17:10:01Z)
- ERS: a novel comprehensive endoscopy image dataset for machine learning, compliant with the MST 3.0 specification [0.0]
The article presents a new multi-label comprehensive image dataset from flexible endoscopy, colonoscopy and capsule endoscopy, named ERS.
The dataset contains around 6,000 precisely labeled and 115,000 approximately labeled frames from endoscopy videos, precise segmentation masks along with 22,600 approximate segmentation masks, and 1.23 million unlabeled frames from flexible and capsule endoscopy videos.
arXiv Detail & Related papers (2022-01-21T15:39:45Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Co-Generation and Segmentation for Generalized Surgical Instrument Segmentation on Unlabelled Data [49.419268399590045]
Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays.
Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data.
In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries.
arXiv Detail & Related papers (2021-03-16T18:41:18Z)
- m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks [4.926395463398194]
We propose a deep learning based semantic segmentation algorithm to identify and label the tissues and organs in the endoscopic video feed of the human torso region.
We present an annotated dataset, m2caiSeg, created from endoscopic video feeds of real-world surgical procedures.
arXiv Detail & Related papers (2020-08-23T23:30:15Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.