Real-time Informative Surgical Skill Assessment with Gaussian Process
Learning
- URL: http://arxiv.org/abs/2112.02598v1
- Date: Sun, 5 Dec 2021 15:35:40 GMT
- Title: Real-time Informative Surgical Skill Assessment with Gaussian Process
Learning
- Authors: Yangming Li, Randall Bly, Sarah Akkina, Rajeev C. Saxena, Ian
Humphreys, Mark Whipple, Kris Moe, Blake Hannaford
- Abstract summary: This work presents a novel Gaussian Process Learning-based automatic objective surgical skill assessment method for ESSBSs.
The proposed method projects the instrument movements into the endoscope coordinate frame to reduce the data dimensionality.
The experimental results show that the proposed method reaches 100% prediction precision for complete surgical procedures and 90% precision for real-time prediction assessment.
- Score: 12.019641896240245
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Endoscopic Sinus and Skull Base Surgeries (ESSBSs) are challenging
and potentially dangerous procedures, and objective skill assessment is a key
component in improving the effectiveness of surgical training, re-validating
surgeons' skills, and decreasing surgical trauma and complication rates in
operating rooms. Because of the complexity of surgical procedures, the
variation in operating styles, and the rapid development of new surgical
techniques, surgical skill assessment remains a challenging problem.
This work presents a novel Gaussian Process Learning-based heuristic automatic
objective surgical skill assessment method for ESSBSs. Unlike classical
surgical skill assessment algorithms, the proposed method 1) uses kinematic
features of relative surgical instrument movements, rather than specific
surgical tasks or summary statistics, to assess skill in real time; 2) provides
informative feedback rather than a summative score; 3) incrementally learns
from new data rather than depending on a fixed dataset.
The proposed method projects the instrument movements into the endoscope
coordinate frame to reduce the data dimensionality. It then extracts kinematic
features from the projected data and learns the relationship between surgical
skill levels and those features with Gaussian Process learning. The method was
verified in full endoscopic skull base and sinus surgeries on cadavers. These
surgeries involve different pathologies, require different treatments, and vary
in complexity. The experimental results show that the proposed method reaches
100% prediction precision for complete surgical procedures and 90% precision
for real-time assessment.
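Since the paper does not include code, the following is a minimal sketch of the
two-step pipeline described above: project instrument poses into the endoscope
frame, extract simple kinematic features, and fit a Gaussian Process regressor.
The feature definitions, the kernel, and the scikit-learn usage are
illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch, not the authors' implementation: feature set and
# kernel are illustrative choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def project_to_endoscope(T_instrument, T_endoscope):
    """Express an instrument pose (4x4 homogeneous transform) in the
    endoscope frame -- the dimensionality-reduction step."""
    return np.linalg.inv(T_endoscope) @ T_instrument

def kinematic_features(positions, dt):
    """Example kinematic features over a window of projected positions
    (N x 3): mean speed, mean acceleration magnitude, path length."""
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    return np.array([
        speed.mean(),
        np.linalg.norm(acc, axis=1).mean(),
        speed.sum() * dt,
    ])

# Placeholder training data: per-window features and expert skill ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)

# A GP regressor returns a predictive mean *and* standard deviation, which
# is what enables confidence-aware, informative real-time feedback.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X[:1], return_std=True)
print(f"predicted skill {mean[0]:.2f} +/- {std[0]:.2f}")
```

The abstract's incremental-learning claim could be approximated here by
refitting on the augmented dataset, at the O(n^3) cost of exact GPs; sparse or
online GP variants are the usual workaround.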
Related papers
- ZEAL: Surgical Skill Assessment with Zero-shot Tool Inference Using Unified Foundation Model [0.07143413923310668]
This study introduces ZEAL (surgical skill assessment with Zero-shot surgical tool segmentation with a unifiEd foundAtion modeL).
ZEAL predicts segmentation masks, capturing essential features of both instruments and surroundings.
It produces a surgical skill score, offering an objective measure of proficiency.
arXiv Detail & Related papers (2024-07-03T01:20:56Z) - Video-based Surgical Skill Assessment using Tree-based Gaussian Process
Classifier [2.3964255330849356]
This paper presents a novel pipeline for automated surgical skill assessment using video data.
The pipeline incorporates a representation flow convolutional neural network and a novel tree-based Gaussian process classifier.
The proposed method has the potential to facilitate skill improvement among surgery fellows and enhance patient safety.
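For intuition about the classification stage just summarized, here is a minimal
sketch under stated assumptions: the video features below are random
placeholders for the representation-flow CNN embeddings, and scikit-learn's
standard GaussianProcessClassifier stands in for the paper's tree-based
variant, which has no off-the-shelf implementation.

```python
# Illustrative stand-in: plain GP classification on placeholder video
# embeddings; the paper's tree-based GP classifier is swapped for
# scikit-learn's standard GaussianProcessClassifier.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
video_features = rng.normal(size=(30, 16))  # placeholder CNN embeddings
skill_labels = rng.integers(0, 3, size=30)  # e.g. novice/intermediate/expert

clf = GaussianProcessClassifier(kernel=RBF(), multi_class="one_vs_rest")
clf.fit(video_features, skill_labels)

# Class probabilities give a confidence estimate alongside the predicted level.
print(np.round(clf.predict_proba(video_features[:1]), 3))
```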
arXiv Detail & Related papers (2023-12-15T21:06:22Z) - Deep Multimodal Fusion for Surgical Feedback Classification [70.53297887843802]
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
arXiv Detail & Related papers (2023-12-06T01:59:47Z) - Safe Deep RL for Intraoperative Planning of Pedicle Screw Placement [61.28459114068828]
We propose an intraoperative planning approach for robotic spine surgery that leverages real-time observation for drill path planning based on Safe Deep Reinforcement Learning (DRL).
Our approach was capable of achieving 90% bone penetration with respect to the gold standard (GS) drill planning.
arXiv Detail & Related papers (2023-05-09T11:42:53Z) - Demonstration-Guided Reinforcement Learning with Efficient Exploration
for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method estimates expert-like behaviors with higher values to facilitate productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z) - Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z) - Quantification of Robotic Surgeries with Vision-Based Deep Learning [45.165919577877695]
We propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery.
We validated our framework on four video-based datasets of two commonly-encountered types of steps within minimally-invasive robotic surgeries.
arXiv Detail & Related papers (2022-05-06T06:08:35Z) - CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z) - Towards Unified Surgical Skill Assessment [18.601526803020885]
We propose a unified multi-path framework for automatic surgical skill assessment.
We conduct experiments on the JIGSAWS dataset of simulated surgical tasks, and a new clinical dataset of real laparoscopic surgeries.
arXiv Detail & Related papers (2021-06-02T09:06:43Z) - Learning Invariant Representation of Tasks for Robust Surgical State
Estimation [39.515036686428836]
We propose StiseNet, a Surgical Task Invariance State Estimation Network.
StiseNet minimizes the effects of variations in surgical technique and operating environments inherent to RAS datasets.
It is shown to outperform state-of-the-art state estimation methods on three datasets.
arXiv Detail & Related papers (2021-02-18T02:32:50Z) - Surgical Skill Assessment on In-Vivo Clinical Data via the Clearness of
Operating Field [18.643159726513133]
Surgical skill assessment is studied in this paper on a real clinical dataset.
The clearness of operating field (COF) is identified as a good proxy for overall surgical skills.
An objective and automated framework is proposed to predict surgical skills through the proxy of COF.
In experiments, the proposed method achieves 0.55 Spearman's correlation with the ground truth of overall technical skill.
arXiv Detail & Related papers (2020-08-27T07:12:16Z)