Objective Surgical Skills Assessment and Tool Localization: Results from
the MICCAI 2021 SimSurgSkill Challenge
- URL: http://arxiv.org/abs/2212.04448v1
- Date: Thu, 8 Dec 2022 18:14:52 GMT
- Title: Objective Surgical Skills Assessment and Tool Localization: Results from
the MICCAI 2021 SimSurgSkill Challenge
- Authors: Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Ziheng Wang, Max Berniker,
Satoshi Kondo, Emanuele Colleoni, Dimitris Psychogyios, Yueming Jin, Jinfan
Zhou, Evangelos Mazomenos, Lena Maier-Hein, Danail Stoyanov, Stefanie
Speidel, Anthony Jarc
- Abstract summary: SimSurgSkill 2021 (hosted as a sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in this endeavor.
Competitors were tasked with localizing instruments and predicting surgical skill.
Using this publicly available dataset and results as a springboard, future work may enable more efficient training of surgeons with advances in surgical data science.
- Score: 11.007322707874184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Timely and effective feedback within surgical training plays a critical role
in developing the skills required to perform safe and efficient surgery.
Feedback from expert surgeons, while especially valuable in this regard, is
challenging to acquire due to their typically busy schedules, and may be
subject to biases. Formal assessment procedures like OSATS and GEARS attempt to
provide objective measures of skill, but remain time-consuming. With advances
in machine learning there is an opportunity for fast and objective automated
feedback on technical skills. The SimSurgSkill 2021 challenge (hosted as a
sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in
this endeavor. Using virtual reality (VR) surgical tasks, competitors were
tasked with localizing instruments and predicting surgical skill. Here we
summarize the winning approaches and how they performed. Using this publicly
available dataset and results as a springboard, future work may enable more
efficient training of surgeons with advances in surgical data science. The
dataset can be accessed from
https://console.cloud.google.com/storage/browser/isi-simsurgskill-2021.
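The dataset is hosted in a public Google Cloud Storage bucket. As a minimal sketch of programmatic access (assuming the bucket permits anonymous reads, as public challenge buckets typically do; the object path below is illustrative, not a real file name):

```python
# Minimal sketch: list and download SimSurgSkill 2021 files from the public
# GCS bucket. Bucket name is taken from the console URL above; anonymous
# read access is assumed.
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "isi-simsurgskill-2021"

client = storage.Client.create_anonymous_client()
for blob in client.list_blobs(BUCKET):
    print(blob.name, blob.size)

# Download a single file (object name is illustrative):
blob = client.bucket(BUCKET).blob("train/videos/example.mp4")
Path("data/train/videos").mkdir(parents=True, exist_ok=True)
blob.download_to_filename("data/train/videos/example.mp4")
```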
Related papers
- Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom [9.41936397281689]
Improved surgical skill is generally associated with improved patient outcomes, but assessment is subjective and labour-intensive.
A new public dataset is introduced, focusing on simulated surgery, using the nasal phase of endoscopic pituitary surgery as an exemplar.
A Multilayer Perceptron achieved 87% accuracy in predicting surgical skill level (novice or expert), with the "ratio of total procedure time to instrument visible time" correlated with higher surgical skill.
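As a rough illustration of this kind of pipeline (a sketch, not the paper's actual model or feature set), a small MLP can be trained on per-procedure timing features such as the visibility ratio above:

```python
# Sketch: binary novice/expert classification from instrument-tracking
# features with a small MLP. Feature names and data are illustrative
# placeholders, not the paper's inputs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# One row per procedure: [procedure_time / instrument_visible_time,
# mean instrument speed, number of instrument exchanges]
X = rng.normal(size=(40, 3))
y = rng.integers(0, 2, size=40)  # 0 = novice, 1 = expert

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```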
arXiv Detail & Related papers (2024-09-25T15:27:44Z)
- PitVis-2023 Challenge: Workflow Recognition in videos of Endoscopic Pituitary Surgery [46.2901962659261]
The Pituitary Vision (PitVis) 2023 Challenge tasks the community with step and instrument recognition in videos of endoscopic pituitary surgery.
This is a unique task when compared to other minimally invasive surgeries due to the smaller working space.
There were 18 submissions from 9 teams across 6 countries, using a variety of deep learning models.
arXiv Detail & Related papers (2024-09-02T11:38:06Z)
- Enhancing Surgical Performance in Cardiothoracic Surgery with Innovations from Computer Vision and Artificial Intelligence: A Narrative Review [12.241487673677517]
This narrative review synthesises work on technical and non-technical surgical skills, task performance, and pose estimation.
It illustrates new opportunities to advance cardiothoracic surgical performance with innovations from computer vision and artificial intelligence.
arXiv Detail & Related papers (2024-02-17T14:16:25Z)
- SAR-RARP50: Segmentation of surgical instrumentation and Action Recognition on Robot-Assisted Radical Prostatectomy Challenge [72.97934765570069]
We release the first multimodal, publicly available, in-vivo dataset for surgical action recognition and semantic instrumentation segmentation, containing 50 suturing video segments of Robotic-Assisted Radical Prostatectomy (RARP).
The aim of the challenge is to enable researchers to leverage the scale of the provided dataset and develop robust and highly accurate single-task action recognition and tool segmentation approaches in the surgical domain.
A total of 12 teams participated in the challenge, contributing 7 action recognition methods, 9 instrument segmentation techniques, and 4 multitask approaches that integrated both action recognition and instrument segmentation.
arXiv Detail & Related papers (2023-12-31T13:32:18Z)
- Deep Multimodal Fusion for Surgical Feedback Classification [70.53297887843802]
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
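A common pattern for this kind of multimodal, multi-label setup is late fusion of per-modality embeddings; the sketch below assumes pre-computed text/audio/video embeddings and is hypothetical, not the authors' architecture:

```python
# Hypothetical late-fusion sketch for multi-label classification over five
# feedback categories. Each modality is assumed to be pre-encoded to a
# fixed-size embedding; dimensions are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=512,
                 hidden=256, n_labels=5):
        super().__init__()
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, hidden),
            "audio": nn.Linear(audio_dim, hidden),
            "video": nn.Linear(video_dim, hidden),
        })
        self.head = nn.Linear(hidden * 3, n_labels)

    def forward(self, text, audio, video):
        parts = [torch.relu(self.proj[k](v)) for k, v in
                 (("text", text), ("audio", audio), ("video", video))]
        return self.head(torch.cat(parts, dim=-1))  # logits per label

model = LateFusionClassifier()
logits = model(torch.randn(2, 768), torch.randn(2, 128), torch.randn(2, 512))
probs = torch.sigmoid(logits)  # labels are independent, so sigmoid not softmax
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (2, 5)).float())
```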
arXiv Detail & Related papers (2023-12-06T01:59:47Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
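One standard way to realize this weak-supervision idea (a sketch under assumed details, not a challenge entry) is to train a tool-presence classifier whose class activation maps yield coarse tool locations without any box annotations:

```python
# Sketch of the weak-supervision idea: train on frame-level tool-presence
# labels only, then read coarse localization from class activation maps.
# Hypothetical model; the backbone and tool count are placeholders.
import torch
import torch.nn as nn

class PresenceCAM(nn.Module):
    def __init__(self, n_tools=7):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_tools, 1)  # 1x1 conv -> per-tool map

    def forward(self, x):
        cam = self.classifier(self.backbone(x))  # (B, n_tools, H', W')
        logits = cam.amax(dim=(-2, -1))          # presence via spatial max
        return logits, cam

model = PresenceCAM()
logits, cam = model(torch.randn(2, 3, 224, 224))
# Train with BCE on presence labels; at test time, peaks in `cam` give
# coarse tool locations even though no boxes were ever seen.
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (2, 7)).float())
```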
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method assigns higher value estimates to expert-like behaviors, which facilitates productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
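For reference, mAP here is the average precision per triplet class, averaged over classes; a toy sketch of that computation follows (the challenge's exact evaluation protocol may differ):

```python
# Sketch of the mAP metric used for multi-label recognition: per-class
# average precision, then the mean over classes. Scores and labels are toy.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_frames, n_classes = 100, 10  # illustrative sizes
y_true = rng.integers(0, 2, size=(n_frames, n_classes))
y_score = rng.random(size=(n_frames, n_classes))

ap_per_class = [average_precision_score(y_true[:, c], y_score[:, c])
                for c in range(n_classes)]
print("mAP:", float(np.mean(ap_per_class)))
```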
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Real-time Informative Surgical Skill Assessment with Gaussian Process Learning [12.019641896240245]
This work presents a novel Gaussian Process Learning-based automatic objective surgical skill assessment method for endoscopic sinus and skull base surgeries (ESSBSs).
The proposed method projects the instrument movements into the endoscope coordinate to reduce the data dimensionality.
The experimental results show that the proposed method reaches 100% prediction precision for complete surgical procedures and 90% precision for real-time prediction assessment.
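The projection step can be pictured as a rigid-body change of frame; a minimal sketch assuming a known endoscope pose (all values illustrative):

```python
# Sketch: express instrument tip positions, recorded in a world/robot frame,
# in the endoscope camera frame. T_world_cam (camera pose) is assumed known.
import numpy as np

def to_camera_frame(p_world, T_world_cam):
    """p_world: (N, 3) points; T_world_cam: 4x4 camera pose in the world frame."""
    T_cam_world = np.linalg.inv(T_world_cam)
    p_h = np.hstack([p_world, np.ones((len(p_world), 1))])  # homogeneous coords
    return (T_cam_world @ p_h.T).T[:, :3]

T = np.eye(4)
T[:3, 3] = [0.10, 0.00, 0.05]  # camera offset from the world origin, in meters
tip_world = np.array([[0.12, 0.03, 0.20]])
print(to_camera_frame(tip_world, T))
```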
arXiv Detail & Related papers (2021-12-05T15:35:40Z)
- The SARAS Endoscopic Surgeon Action Detection (ESAD) dataset: Challenges and methods [15.833413083110903]
This paper presents ESAD, the first large-scale dataset designed to tackle the problem of surgeon action detection in endoscopic minimally invasive surgery.
The dataset provides bounding box annotation for 21 action classes on real endoscopic video frames captured during prostatectomy, and was used as the basis of a recent MIDL 2020 challenge.
arXiv Detail & Related papers (2021-04-07T15:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.