Enhancing Surgical Performance in Cardiothoracic Surgery with
Innovations from Computer Vision and Artificial Intelligence: A Narrative
Review
- URL: http://arxiv.org/abs/2402.11288v1
- Date: Sat, 17 Feb 2024 14:16:25 GMT
- Authors: Merryn D. Constable, Hubert P. H. Shum, Stephen Clark
- Abstract summary: This narrative review synthesises work on technical and non-technical surgical skills, task performance, and pose estimation.
It illustrates new opportunities to advance cardiothoracic surgical performance with innovations from computer vision and artificial intelligence.
- Score: 12.241487673677517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When technical requirements are high, and patient outcomes are critical,
opportunities for monitoring and improving surgical skills via objective motion
analysis feedback may be particularly beneficial. This narrative review
synthesises work on technical and non-technical surgical skills, collaborative
task performance, and pose estimation to illustrate new opportunities to
advance cardiothoracic surgical performance with innovations from computer
vision and artificial intelligence. These technological innovations are
critically evaluated in terms of the benefits they could offer the
cardiothoracic surgical community, and any barriers to the uptake of the
technology are elaborated upon. Like some other specialities, cardiothoracic
surgery has relatively few opportunities to benefit from tools with data
capture technology embedded within them (as with robotic-assisted laparoscopic
surgery, for example). In such cases, pose estimation techniques that allow for
movement tracking across a conventional operating field without using
specialist equipment or markers offer considerable potential. With video data
from either simulated or real surgical procedures, these tools can (1) provide
insight into the development of expertise and surgical performance over a
surgeon's career, (2) provide feedback to trainee surgeons regarding areas for
improvement, (3) provide the opportunity to investigate what aspects of skill
may be linked to patient outcomes, which can (4) inform which aspects of surgical
skill should be prioritised within training or mentoring programmes.
Classifier or assessment algorithms that use artificial intelligence to 'learn'
what expertise is from expert surgical evaluators could further assist
educators in determining if trainees meet competency thresholds.
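As a concrete illustration of the objective motion-analysis feedback described above, the sketch below computes two commonly used kinematic metrics (path length and log dimensionless jerk) from a keypoint trajectory of the kind a markerless pose-estimation tool could extract from surgical video. The trajectories, frame rate, and metric choices here are illustrative assumptions, not data or methods from the review.

```python
import numpy as np

def path_length(traj):
    """Total distance travelled by one tracked keypoint.

    traj: (N, D) array of positions sampled at a fixed frame rate.
    """
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def log_dimensionless_jerk(traj, fs):
    """Log dimensionless jerk: higher (closer to zero) = smoother motion."""
    dt = 1.0 / fs
    vel = np.gradient(traj, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)  # d^2 v / dt^2
    duration = (len(traj) - 1) * dt
    # Riemann-sum approximation of the squared-jerk integral
    integral = dt * np.sum(np.sum(jerk ** 2, axis=1))
    return float(-np.log(duration ** 3 / speed.max() ** 2 * integral))

# Synthetic stand-ins for pose-estimated instrument-tip trajectories
fs = 100
t = np.linspace(0.0, 2.0, 201)
smooth = np.column_stack([t, np.sin(np.pi * t / 2)])        # fluid movement
jerky = smooth + np.column_stack([np.zeros_like(t),
                                  0.05 * np.sin(25 * t)])   # tremulous movement

print(path_length(smooth) < path_length(jerky))                                # True
print(log_dimensionless_jerk(smooth, fs) > log_dimensionless_jerk(jerky, fs))  # True
```

Metrics like these, aggregated over many recorded procedures, are one plausible basis for the career-long performance tracking and trainee feedback the abstract envisions.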
Related papers
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in
Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach compared to unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
- Deep Multimodal Fusion for Surgical Feedback Classification [70.53297887843802]
We leverage a clinically-validated five-category classification of surgical feedback.
We then develop a multi-label machine learning model to classify these five categories of surgical feedback from inputs of text, audio, and video modalities.
The ultimate goal of our work is to help automate the annotation of real-time contextual surgical feedback at scale.
arXiv Detail & Related papers (2023-12-06T01:59:47Z)
- Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method assigns higher value estimates to expert-like behaviours to facilitate productive interactions with the environment.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- Objective Surgical Skills Assessment and Tool Localization: Results from the MICCAI 2021 SimSurgSkill Challenge [11.007322707874184]
SimSurgSkill 2021 (hosted as a sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in this endeavor.
Competitors were tasked with localizing instruments and predicting surgical skill.
Using this publicly available dataset and results as a springboard, future work may enable more efficient training of surgeons with advances in surgical data science.
arXiv Detail & Related papers (2022-12-08T18:14:52Z)
- Quantification of Robotic Surgeries with Vision-Based Deep Learning [45.165919577877695]
We propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery.
We validated our framework on four video-based datasets of two commonly-encountered types of steps within minimally-invasive robotic surgeries.
arXiv Detail & Related papers (2022-05-06T06:08:35Z)
- CholecTriplet2021: A benchmark challenge for surgical action triplet recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z)
- Integrating Artificial Intelligence and Augmented Reality in Robotic Surgery: An Initial dVRK Study Using a Surgical Education Scenario [15.863254207155835]
We develop a novel robotic surgery education system by integrating an artificial intelligence surgical module with augmented reality visualization.
The proposed system is evaluated through a preliminary experiment on the peg-transfer surgical education task.
arXiv Detail & Related papers (2022-01-02T17:34:10Z)
- Real-time Informative Surgical Skill Assessment with Gaussian Process Learning [12.019641896240245]
This work presents a novel Gaussian Process Learning-based automatic objective surgical skill assessment method for ESSBSs.
The proposed method projects the instrument movements into the endoscope coordinate to reduce the data dimensionality.
The experimental results show that the proposed method reaches 100% prediction precision for complete surgical procedures and 90% precision for real-time prediction assessment.
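As a rough illustration of Gaussian-process-based skill assessment in general (not a reproduction of this paper's pipeline), the sketch below fits a Gaussian process classifier to synthetic per-trial kinematic features. The feature definitions, cluster locations, and expert/novice labels are all invented for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical per-trial kinematic features: [path length, smoothness score]
expert = rng.normal(loc=[1.2, -3.0], scale=0.3, size=(25, 2))  # short, smooth paths
novice = rng.normal(loc=[3.5, -9.0], scale=0.3, size=(25, 2))  # long, jerky paths

X = np.vstack([expert, novice])
y = np.array([1] * 25 + [0] * 25)  # 1 = expert-level, 0 = not yet

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
clf.fit(X, y)

# Probability that an unseen trial reflects expert-level performance
p_expert = clf.predict_proba([[1.4, -3.5]])[0, 1]
print(clf.score(X, y))  # training accuracy on this toy, well-separated data
```

A probabilistic output like `p_expert` is one reason Gaussian processes suit real-time assessment: the classifier can report calibrated confidence rather than a bare label.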
arXiv Detail & Related papers (2021-12-05T15:35:40Z)
- Generational Frameshifts in Technology: Computer Science and Neurosurgery, The VR Use Case [0.0]
The democratization of neurosurgery is at hand and will be driven by our development, extraction, and adoption of these tools of the modern world.
The ability to perform surgery more safely and more efficiently while capturing the operative details and parsing each component of the operation will open an entirely new epoch advancing our field and all surgical specialties.
arXiv Detail & Related papers (2021-10-08T20:02:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.