Video-based Surgical Skill Assessment using Tree-based Gaussian Process Classifier
- URL: http://arxiv.org/abs/2312.10208v2
- Date: Thu, 21 Dec 2023 05:44:09 GMT
- Title: Video-based Surgical Skill Assessment using Tree-based Gaussian Process Classifier
- Authors: Arefeh Rezaei, Mohammad Javad Ahmadi, Amir Molaei, Hamid D. Taghirad
- Abstract summary: This paper presents a novel pipeline for automated surgical skill assessment using video data.
The pipeline incorporates a representation flow convolutional neural network and a novel tree-based Gaussian process classifier.
The proposed method has the potential to facilitate skill improvement among surgery fellows and enhance patient safety.
- Score: 2.3964255330849356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to present a novel pipeline for automated surgical skill
assessment using video data and to showcase the effectiveness of the proposed
approach in evaluating surgeon proficiency, its potential for targeted training
interventions, and quality assurance in surgical departments. The pipeline
incorporates a representation flow convolutional neural network and a novel
tree-based Gaussian process classifier, which is robust to noise, while being
computationally efficient. Additionally, new kernels are introduced to enhance
accuracy. The performance of the pipeline is evaluated using the JIGSAWS
dataset. Comparative analysis with existing literature reveals a significant
improvement in accuracy and a reduction in computational cost. The proposed
pipeline thus improves both the accuracy and the computational efficiency of
surgical skill assessment from video data. Feedback from our colleague surgeons
suggests that the proposed method has the potential to facilitate skill
improvement among surgery fellows and to enhance patient safety through
targeted training interventions and quality assurance in surgical departments.
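The abstract describes a tree-based Gaussian process classifier over video-derived features. As a minimal sketch of that idea, the snippet below routes samples through a small tree of binary GP classifiers (novice vs. rest at the root, then intermediate vs. expert at a leaf). The tree layout, features, and custom kernels here are illustrative assumptions, not the paper's actual design: synthetic features stand in for the representation-flow CNN output, and scikit-learn's standard RBF kernel stands in for the paper's novel kernels.

```python
# Hypothetical sketch of a tree of binary Gaussian process classifiers
# for three JIGSAWS-style skill levels. Features, tree layout, and kernel
# are illustrative stand-ins, NOT the paper's actual method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic per-video feature vectors (stand-ins for CNN features).
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(30, 4)),   # novice
    rng.normal(loc=2.0, scale=0.5, size=(30, 4)),   # intermediate
    rng.normal(loc=4.0, scale=0.5, size=(30, 4)),   # expert
])
y = np.array([0] * 30 + [1] * 30 + [2] * 30)

# Root node: novice vs. {intermediate, expert}.
root = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0)
root.fit(X, (y > 0).astype(int))

# Leaf node: intermediate vs. expert, trained only on non-novice data.
leaf = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0)
leaf.fit(X[y > 0], (y[y > 0] == 2).astype(int))

def predict_skill(x):
    """Route one sample down the tree of binary GP classifiers."""
    x = np.atleast_2d(x)
    if root.predict(x)[0] == 0:
        return 0  # novice
    return 1 if leaf.predict(x)[0] == 0 else 2  # intermediate / expert

preds = np.array([predict_skill(x) for x in X])
print("training accuracy:", (preds == y).mean())
```

Decomposing a multi-class problem into a tree of binary GP classifiers keeps each node's Laplace-approximation inference small, which is one plausible route to the computational efficiency the abstract claims.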
Related papers
- Multi-Modal Self-Supervised Learning for Surgical Feedback Effectiveness Assessment [66.6041949490137]
We propose a method that integrates information from transcribed verbal feedback and corresponding surgical video to predict feedback effectiveness.
Our findings show that both transcribed feedback and surgical video are individually predictive of trainee behavior changes.
Our results demonstrate the potential of multi-modal learning to advance the automated assessment of surgical feedback.
arXiv Detail & Related papers (2024-11-17T00:13:00Z)
- ZEAL: Surgical Skill Assessment with Zero-shot Tool Inference Using Unified Foundation Model [0.07143413923310668]
This study introduces ZEAL (surgical skill assessment with Zero-shot surgical tool segmentation with a unifiEd foundAtion modeL)
ZEAL predicts segmentation masks, capturing essential features of both instruments and surroundings.
It produces a surgical skill score, offering an objective measure of proficiency.
arXiv Detail & Related papers (2024-07-03T01:20:56Z)
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach compared to unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
- Deep Multimodal Fusion for Surgical Feedback Classification [70.53297887843802]
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
arXiv Detail & Related papers (2023-12-06T01:59:47Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method estimates expert-like behaviors with higher values to facilitate productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Video-based Formative and Summative Assessment of Surgical Tasks using Deep Learning [0.8612287536028312]
We propose a deep learning (DL) model that can automatically and objectively provide a high-stakes summative assessment of surgical skill execution.
Formative assessment is generated using heatmaps of visual features that correlate with surgical performance.
arXiv Detail & Related papers (2022-03-17T20:07:48Z)
- Real-time Informative Surgical Skill Assessment with Gaussian Process Learning [12.019641896240245]
This work presents a novel Gaussian Process Learning-based automatic objective surgical skill assessment method for ESSBSs.
The proposed method projects the instrument movements into the endoscope coordinate to reduce the data dimensionality.
The experimental results show that the proposed method reaches 100% prediction precision for complete surgical procedures and 90% precision for real-time prediction assessment.
arXiv Detail & Related papers (2021-12-05T15:35:40Z)
- Towards Unified Surgical Skill Assessment [18.601526803020885]
We propose a unified multi-path framework for automatic surgical skill assessment.
We conduct experiments on the JIGSAWS dataset of simulated surgical tasks, and a new clinical dataset of real laparoscopic surgeries.
arXiv Detail & Related papers (2021-06-02T09:06:43Z)
- Surgical Skill Assessment on In-Vivo Clinical Data via the Clearness of Operating Field [18.643159726513133]
Surgical skill assessment is studied in this paper on a real clinical dataset.
The clearness of operating field (COF) is identified as a good proxy for overall surgical skills.
An objective and automated framework is proposed to predict surgical skills through the proxy of COF.
In experiments, the proposed method achieves 0.55 Spearman's correlation with the ground truth of overall technical skill.
arXiv Detail & Related papers (2020-08-27T07:12:16Z)
- Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search [63.07088785532908]
We propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification.
Our framework consistently outperforms the existing methods on the suturing task of JIGSAWS dataset in terms of accuracy, edit score and F1 score.
arXiv Detail & Related papers (2020-02-20T13:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.