CholecTrack20: A Dataset for Multi-Class Multiple Tool Tracking in
Laparoscopic Surgery
- URL: http://arxiv.org/abs/2312.07352v1
- Date: Tue, 12 Dec 2023 15:18:15 GMT
- Title: CholecTrack20: A Dataset for Multi-Class Multiple Tool Tracking in
Laparoscopic Surgery
- Authors: Chinedu Innocent Nwoye, Kareem Elgohary, Anvita Srinivas, Fauzan Zaid,
  Joël L. Lavanchy, Nicolas Padoy
- Abstract summary: CholecTrack20 is an extensive dataset meticulously annotated for multi-class multi-tool tracking across three perspectives.
The dataset comprises 20 laparoscopic videos with over 35,000 frames and 65,000 annotated tool instances.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Tool tracking in surgical videos is vital in computer-assisted intervention
for tasks like surgeon skill assessment, safety zone estimation, and
human-machine collaboration during minimally invasive procedures. The lack of
large-scale datasets hampers the adoption of artificial intelligence in this
domain. Current datasets use an overly generic tracking formalization, often
lacking surgical context: a deficiency that becomes evident when tools move out
of the camera's scope, resulting in rigid trajectories that hinder realistic
surgical representation. This paper addresses the need for a more precise and
adaptable tracking formalization tailored to the intricacies of endoscopic
procedures by introducing CholecTrack20, an extensive dataset meticulously
annotated for multi-class multi-tool tracking across three perspectives
representing the various ways of considering the temporal duration of a tool
trajectory: (1) intraoperative, (2) intracorporeal, and (3) visibility within
the camera's scope. The dataset comprises 20 laparoscopic videos with over
35,000 frames and 65,000 annotated tool instances with details on spatial
location, category, identity, operator, phase, and surgical visual conditions.
This detailed dataset caters to the evolving assistive requirements within a
procedure.
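To make the three trajectory perspectives concrete, the following is a minimal sketch of how a single annotated tool instance could carry one identity per perspective; all field names here are illustrative assumptions, not the dataset's actual annotation schema.
```python
from dataclasses import dataclass

# Hypothetical record layout; field names are illustrative and do NOT
# reflect CholecTrack20's actual file format.
@dataclass
class ToolInstance:
    frame_id: int
    bbox: tuple              # (x, y, w, h) spatial location in the frame
    category: str            # tool class, e.g. "grasper"
    operator: str            # which operator handles the tool
    phase: str               # surgical phase at this frame
    intraoperative_id: int   # stable across the whole procedure
    intracorporeal_id: int   # reset each time the tool re-enters the body
    visibility_id: int       # reset each time the tool re-enters the camera view

# The same physical grasper before and after briefly leaving the camera's
# scope: the intraoperative and intracorporeal identities persist, while
# the visibility trajectory is split into two identities.
before = ToolInstance(100, (120, 80, 60, 40), "grasper", "main surgeon",
                      "calot-triangle-dissection", 1, 1, 1)
after = ToolInstance(180, (300, 150, 60, 40), "grasper", "main surgeon",
                     "calot-triangle-dissection", 1, 1, 2)
```
This split is what lets a tracker be scored differently depending on whether identities must survive out-of-view gaps (intraoperative), out-of-body gaps (intracorporeal), or only continuous visibility.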
Related papers
- Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom
Improved surgical skill is generally associated with improved patient outcomes, but assessment is subjective and labour-intensive.
A new public dataset is introduced, focusing on simulated surgery, using the nasal phase of endoscopic pituitary surgery as an exemplar.
A Multilayer Perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the "ratio of total procedure time to instrument visible time" feature correlating with higher surgical skill (a minimal sketch of this feature follows).
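Taking the reported feature at face value, here is a minimal sketch, assuming hypothetical per-frame visibility flags rather than the paper's actual tracking output, of how such a ratio could be computed.
```python
# Minimal sketch (hypothetical input): the "ratio of total procedure time
# to instrument visible time", computed from per-frame visibility flags.
def procedure_to_visible_ratio(visible_flags: list[bool]) -> float:
    total_frames = len(visible_flags)
    visible_frames = sum(visible_flags)   # frames where the instrument is in view
    if visible_frames == 0:
        return float("inf")               # instrument never visible
    return total_frames / visible_frames

# Example: visible in 600 of 800 frames -> ratio 1.33...; values closer
# to 1 mean the instrument stayed in view for more of the procedure.
print(procedure_to_visible_ratio([True] * 600 + [False] * 200))
```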
arXiv Detail & Related papers (2024-09-25T15:27:44Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Visual-Kinematics Graph Learning for Procedure-agnostic Instrument Tip Segmentation in Robotic Surgeries
We propose a novel visual-kinematics graph learning framework to accurately segment the instrument tip across various surgical procedures.
Specifically, a graph learning framework is proposed to encode relational features of instrument parts from both images and kinematics.
A cross-modal contrastive loss is designed to incorporate a robust geometric prior from kinematics into the image domain for tip segmentation (a generic sketch of such a loss follows).
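The paper's exact loss is not given in this summary, so the following is a generic InfoNCE-style sketch of a cross-modal contrastive objective between paired image and kinematics embeddings; all names and shapes are assumptions.
```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(img_emb: torch.Tensor,
                        kin_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Illustrative stand-in, not the paper's actual loss: pull each image
    embedding toward its paired kinematics embedding and push it away from
    the other pairs in the batch."""
    img = F.normalize(img_emb, dim=1)      # (B, D) unit-norm image features
    kin = F.normalize(kin_emb, dim=1)      # (B, D) unit-norm kinematics features
    logits = img @ kin.t() / temperature   # (B, B) pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)  # matches on diagonal
    # Symmetric over both directions (image->kinematics and kinematics->image).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: loss = cross_modal_infonce(torch.randn(8, 128), torch.randn(8, 128))
```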
arXiv Detail & Related papers (2023-09-02T14:52:58Z)
- POV-Surgery: A Dataset for Egocentric Hand and Tool Pose Estimation During Surgical Activities
POV-Surgery is a large-scale, synthetic, egocentric dataset focusing on pose estimation for hands with different surgical gloves and three orthopedic surgical instruments.
Our dataset consists of 53 sequences and 88,329 frames, featuring high-resolution RGB-D video streams with activity annotations.
We fine-tune current SOTA methods on POV-Surgery and further show their generalizability when applied to real-life cases with surgical gloves and tools.
arXiv Detail & Related papers (2023-07-19T18:00:32Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments
First, we present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z)
- CholecTriplet2022: Show me a tool and tell me the triplet -- an endoscopic vision challenge for surgical action triplet detection
This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection.
It includes weakly-supervised bounding-box localization of every visible surgical instrument (or tool) as the key actors, and the modeling of each tool's activity in the form of an ⟨instrument, verb, target⟩ triplet.
arXiv Detail & Related papers (2023-02-13T11:53:14Z)
- Dissecting Self-Supervised Learning Methods for Surgical Computer Vision
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented for recognizing surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1% (a generic sketch of the mAP computation follows).
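For context on those numbers, the following is a compact, generic sketch of mean average precision over classes, not the challenge's official evaluation code.
```python
import numpy as np

def average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """Non-interpolated AP for one class: average of precision@k over the
    ranks k at which a positive sample occurs."""
    order = np.argsort(-scores)            # rank samples by descending confidence
    labels = labels[order].astype(float)
    n_pos = labels.sum()
    if n_pos == 0:
        return 0.0
    precision_at_k = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return float((precision_at_k * labels).sum() / n_pos)

def mean_average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """mAP: mean per-class AP over classes with at least one positive.
    scores and labels are (num_samples, num_classes) arrays."""
    aps = [average_precision(scores[:, c], labels[:, c])
           for c in range(labels.shape[1]) if labels[:, c].any()]
    return float(np.mean(aps))
```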
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor Operating Room
This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms.
Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery.
arXiv Detail & Related papers (2020-05-07T14:04:29Z)
- Robust Medical Instrument Segmentation Challenge 2019
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.