Machine Learning-Based Automated Assessment of Intracorporeal Suturing in Laparoscopic Fundoplication
- URL: http://arxiv.org/abs/2412.16195v1
- Date: Mon, 16 Dec 2024 17:44:44 GMT
- Title: Machine Learning-Based Automated Assessment of Intracorporeal Suturing in Laparoscopic Fundoplication
- Authors: Shekhar Madhav Khairnar, Huu Phong Nguyen, Alexis Desir, Carla Holcomb, Daniel J. Scott, Ganesh Sankaranarayanan
- Abstract summary: We developed AI-based tool tracking using the Segment Anything Model (SAM) to eliminate the need for human annotators.
An automated tool tracking model was applied to recorded videos of Nissen fundoplication on porcine bowel.
Performance was assessed using supervised and unsupervised models, and an ablation study compared results.
- Score: 0.0
- Abstract: Automated assessment of surgical skills using artificial intelligence (AI) provides trainees with instantaneous feedback. After bimanual tool motions are captured, derived kinematic metrics are reliable predictors of performance in laparoscopic tasks. Implementing automated tool tracking requires time-intensive human annotation. We developed AI-based tool tracking using the Segment Anything Model (SAM) to eliminate the need for human annotators. Here, we describe a study evaluating the usefulness of our tool tracking model in automated assessment during a laparoscopic suturing task in the fundoplication procedure. An automated tool tracking model was applied to recorded videos of Nissen fundoplication on porcine bowel. Surgeons were grouped as novices (PGY1-2) and experts (PGY3-5, attendings). The beginning and end of each suturing step were segmented, and motions of the left and right tools were extracted. A low-pass filter with a 24 Hz cut-off frequency removed noise. Performance was assessed using supervised and unsupervised models, and an ablation study compared results. Kinematic features--RMS velocity, RMS acceleration, RMS jerk, total path length, and Bimanual Dexterity--were extracted and analyzed using Logistic Regression, Random Forest, Support Vector Classifier, and XGBoost. PCA was performed for feature reduction. For unsupervised learning, a Denoising Autoencoder (DAE) model with classifiers, such as a 1-D CNN and traditional models, was trained. Data were extracted for 28 participants (9 novices, 19 experts). Supervised learning with PCA and Random Forest achieved an accuracy of 0.795 and an F1 score of 0.778. The unsupervised 1-D CNN achieved superior results with an accuracy of 0.817 and an F1 score of 0.806, eliminating the need for kinematic feature computation. We demonstrated an AI model capable of automated performance classification, independent of human annotation.
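As a rough illustration of the pipeline described above, the sketch below smooths a pair of tool trajectories with a 24 Hz low-pass filter and computes the kinematic features named in the abstract. It is not the authors' code: the (T, 2) trajectory format, the 60 Hz frame rate, the fourth-order Butterworth filter, and the speed-correlation definition of bimanual dexterity are assumptions made for the example.

```python
# Minimal sketch under the assumptions stated above; not the authors' implementation.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 60.0      # assumed video frame rate (Hz); must exceed 2 x CUTOFF
CUTOFF = 24.0  # low-pass cut-off frequency from the abstract (Hz)

def lowpass(traj: np.ndarray) -> np.ndarray:
    """Zero-phase Butterworth low-pass filter over a (T, 2) trajectory."""
    b, a = butter(N=4, Wn=CUTOFF / (FS / 2.0), btype="low")
    return filtfilt(b, a, traj, axis=0)

def kinematic_features(traj: np.ndarray) -> dict:
    """RMS velocity, RMS acceleration, RMS jerk, and total path length."""
    dt = 1.0 / FS
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)

    def rms(v: np.ndarray) -> float:
        return float(np.sqrt(np.mean(np.linalg.norm(v, axis=1) ** 2)))

    return {
        "rms_velocity": rms(vel),
        "rms_acceleration": rms(acc),
        "rms_jerk": rms(jerk),
        "path_length": float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))),
    }

def bimanual_dexterity(left: np.ndarray, right: np.ndarray) -> float:
    """Assumed proxy: correlation between left- and right-tool speed profiles."""
    def speed(traj: np.ndarray) -> np.ndarray:
        return np.linalg.norm(np.gradient(lowpass(traj), 1.0 / FS, axis=0), axis=1)

    sl, sr = speed(left), speed(right)
    n = min(len(sl), len(sr))
    return float(np.corrcoef(sl[:n], sr[:n])[0, 1])
```

The supervised branch (PCA for feature reduction feeding a Random Forest, which reached 0.795 accuracy in the study) could then be assembled with scikit-learn along these lines; the retained-variance threshold and forest size are likewise assumptions.

```python
# Supervised-branch sketch: PCA feature reduction + Random Forest classifier.
# X: one row of kinematic features per trial; y: novice (0) vs expert (1) labels.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),  # retained-variance threshold is an assumption
    RandomForestClassifier(n_estimators=200, random_state=0),
)
# scores = cross_val_score(clf, X, y, cv=5, scoring="f1")  # once X and y are built
```

Zero-phase filtering (filtfilt) is used in the sketch so that smoothing does not shift the tool positions in time, which would otherwise bias the velocity and jerk estimates.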
Related papers
- Exploring Machine Learning Engineering for Object Detection and Tracking by Unmanned Aerial Vehicle (UAV) [3.600782980481468]
This research effort focuses on the development of a machine learning pipeline emphasizing the inclusion of assurance methods with increasing automation.
A new dataset was created by collecting videos of a moving object, such as a Roomba vacuum cleaner, emulating search and rescue (SAR) in an indoor environment.
After refinement, the dataset was used to train a second YOLOv4 model and a Mask R-CNN model, which were deployed on a Parrot Mambo drone to perform real-time object detection and tracking.
arXiv Detail & Related papers (2024-12-19T19:27:31Z)
- Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks [65.23740556896654]
We investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in lung ultrasound analysis.
When fine-tuning on three lung ultrasound tasks, pretrained models improved the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets, respectively.
arXiv Detail & Related papers (2023-09-05T21:36:42Z)
- Application of the nnU-Net for automatic segmentation of lung lesion on CT images, and implication on radiomic models [1.8231394717039833]
A deep-learning automatic segmentation method was applied on computed tomography images of non-small-cell lung cancer patients.
The effect of manual versus automatic segmentation on the performance of survival radiomic models was also assessed.
arXiv Detail & Related papers (2022-09-24T15:04:23Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Automatic identification of segmentation errors for radiotherapy using geometric learning [0.0]
The aim of this work was to develop a tool to automatically identify errors in 3D OAR segmentations without a ground truth.
The proposed model is trained with self-supervised learning on a synthetically generated dataset of parotid segmentations.
Our best performing model predicted errors on the parotid gland with a precision of 85.0% and 89.7% for internal and external errors, respectively.
arXiv Detail & Related papers (2022-06-27T14:01:52Z)
- A Machine Learning Framework for Automatic Prediction of Human Semen Motility [7.167550590287388]
Several regression models are trained to automatically predict the percentage (0 to 100) of progressive, non-progressive, and immotile spermatozoa in a given sample.
Four machine learning models are evaluated: a linear Support Vector Regressor (SVR), a Multilayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a Recurrent Neural Network (RNN).
Best results for predicting motility are achieved by using the Crocker-Grier algorithm to track sperm cells in an unsupervised way.
arXiv Detail & Related papers (2021-09-16T15:26:40Z)
- Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z)
- One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns general instrument knowledge and a fast adaptation ability through a video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in the multi-modal data for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy (ACC) and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movements.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Classification Under Human Assistance [29.220005688025378]
We show that supervised learning models trained to operate under different automation levels can outperform those trained for full automation as well as humans operating alone.
Experiments on synthetic and real-world data from several applications in medical diagnosis illustrate our theoretical findings.
arXiv Detail & Related papers (2020-06-21T16:52:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.