Surgical task expertise detected by a self-organizing neural network map
- URL: http://arxiv.org/abs/2106.08995v1
- Date: Thu, 3 Jun 2021 10:48:10 GMT
- Title: Surgical task expertise detected by a self-organizing neural network map
- Authors: Birgitta Dresp-Langley, Rongrong Liu, John M. Wandeto
- Abstract summary: Grip force variability in a true expert and a complete novice executing a robot-assisted surgical simulator task reveals statistically significant differences as a function of task expertise.
We show that these skill-specific differences in local grip forces are predicted by the output metric of a self-organizing neural network map (SOM).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Individual grip force profiling of bimanual simulator task performance by experts and novices using a robotic control device designed for endoscopic surgery permits defining benchmark criteria that distinguish true expert task skills from the skills of novices or trainee surgeons. Grip force variability in a true expert and a complete novice executing a robot-assisted surgical simulator task reveals statistically significant differences as a function of task expertise. Here we show that the skill-specific differences in local grip forces are predicted by the output metric of a self-organizing neural network map (SOM) with a bio-inspired functional architecture that maps the functional connectivity of somatosensory neural networks in the primate brain.
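The abstract names the SOM's output metric without defining it; in the authors' related SOM work that metric is the quantization error, the mean distance between an input and its best-matching map unit. Below is a minimal, self-contained NumPy sketch of that idea, with the map size, training schedule, and synthetic grip-force windows all assumed for illustration rather than taken from the paper.

```python
# Minimal self-organizing map (SOM) sketch. The synthetic data, the 4x4 map,
# and the use of quantization error as the output metric are assumptions,
# not the authors' exact setup.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=4, cols=4, epochs=50, lr0=0.5, sigma0=2.0):
    """Fit a rows x cols SOM to `data` (n_samples x n_features)."""
    n, d = data.shape
    weights = rng.random((rows * cols, d))
    # Grid coordinates of each map unit, for the neighbourhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighbourhood
        for x in data[rng.permutation(n)]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)      # grid distance to BMU
            h = np.exp(-dist2 / (2 * sigma ** 2))              # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Synthetic grip-force windows: the expert's forces vary less than the novice's.
expert = 2.0 + 0.1 * rng.standard_normal((200, 10))
novice = 2.0 + 0.6 * rng.standard_normal((200, 10))
som = train_som(expert)
print("QE expert:", quantization_error(expert, som))
print("QE novice:", quantization_error(novice, som))
```

On data like this, the novice's larger force variability surfaces as a larger quantization error on a map fitted to the expert's profile, which is the kind of skill-specific separation the abstract reports.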
Related papers
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach over unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
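This entry names the prediction task but not a reproducible architecture. The sketch below is a deliberately plain stand-in, not the Hypergraph-Transformer: it frames the same problem, predicting the next surgical workflow event from the event history, with a vanilla Transformer encoder. The event vocabulary, dimensions, and data are invented.

```python
# Stand-in next-event predictor for a surgical workflow (NOT the HGT model).
import torch
import torch.nn as nn

NUM_EVENTS, DIM = 16, 32  # hypothetical event vocabulary and embedding size

class NextEventPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EVENTS, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, NUM_EVENTS)

    def forward(self, event_ids):    # (batch, seq_len) int64 event indices
        h = self.encoder(self.embed(event_ids))
        return self.head(h[:, -1])   # logits for the next event

model = NextEventPredictor()
history = torch.randint(0, NUM_EVENTS, (2, 10))  # two dummy event sequences
print(model(history).shape)                      # torch.Size([2, 16])
```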
- SurGNN: Explainable visual scene understanding and assessment of surgical skill using graph neural networks [19.57785997767885]
This paper explores how graph neural networks (GNNs) can be used to enhance visual scene understanding and surgical skill assessment.
GNNs provide interpretable results, revealing the specific actions, instruments, or anatomical structures that contribute to the predicted skill metrics.
arXiv Detail & Related papers (2023-08-24T20:32:57Z)
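As a concrete illustration of skill assessment over a surgical scene graph, here is a minimal graph-convolution sketch in plain PyTorch. The node features, adjacency, single GCN layer, and mean-pooled regression head are simplifying assumptions; the paper's actual model and inputs differ.

```python
# Tiny graph-convolution network for a graph-level skill score (illustrative).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim=8, hid=16):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid)
        self.lin2 = nn.Linear(hid, 1)

    def forward(self, x, adj):
        # Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).rsqrt()
        a_norm = d[:, None] * a * d[None, :]
        h = torch.relu(self.lin1(a_norm @ x))  # one round of message passing
        return self.lin2(h.mean(0))            # graph-level skill score

x = torch.randn(5, 8)                  # 5 scene-graph nodes, 8 features each
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()      # make the graph undirected
print(TinyGCN()(x, adj))               # predicted skill score, shape (1,)
```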
- RObotic MAnipulation Network (ROMAN) – Hybrid Hierarchical Learning for Solving Complex Sequential Tasks [70.69063219750952]
We present a Hybrid Hierarchical Learning framework, the Robotic Manipulation Network (ROMAN).
ROMAN achieves task versatility and robust failure recovery by integrating behavioural cloning, imitation learning, and reinforcement learning.
Experimental results show that by orchestrating and activating these specialised manipulation experts, ROMAN generates correct sequential activations for accomplishing long sequences of sophisticated manipulation tasks.
arXiv Detail & Related papers (2023-06-30T20:35:22Z)
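The orchestration idea, a high-level gate activating specialised manipulation experts, can be sketched compactly. The soft (softmax) gating over frozen linear "experts" below is an illustrative simplification of ROMAN's hybrid hierarchical scheme, not its implementation.

```python
# Gating network soft-selecting among specialist policies (illustrative only).
import torch
import torch.nn as nn

STATE, ACTION, EXPERTS = 12, 4, 3  # hypothetical dimensions

experts = nn.ModuleList([nn.Linear(STATE, ACTION) for _ in range(EXPERTS)])
gate = nn.Sequential(nn.Linear(STATE, EXPERTS), nn.Softmax(dim=-1))

def act(state):
    """Blend expert actions with state-dependent gating weights."""
    w = gate(state)                                            # (batch, EXPERTS)
    actions = torch.stack([e(state) for e in experts], dim=1)  # (batch, EXPERTS, ACTION)
    return (w.unsqueeze(-1) * actions).sum(dim=1)              # weighted action

state = torch.randn(2, STATE)
print(act(state).shape)  # torch.Size([2, 4])
```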
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Spatiotemporal modeling of grip forces captures proficiency in manual robot control [5.504040521972806]
This paper builds on our previous work by exploiting Artificial Intelligence to predict individual grip force variability in manual robot control.
Statistical analyses bring to the fore skill-specific temporal variations in thousands of grip forces recorded from a complete novice and a highly proficient expert.
arXiv Detail & Related papers (2023-03-03T15:08:00Z)
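As a hedged illustration of the kind of statistical contrast this entry describes, the sketch below applies Levene's test for equality of variances to two synthetic grip-force series; the study's real sensors, windowing, and test statistics may differ.

```python
# Variance-equality test on synthetic expert vs. novice grip-force series.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
expert = 2.0 + 0.1 * rng.standard_normal(5000)  # steadier grip forces
novice = 2.0 + 0.5 * rng.standard_normal(5000)  # more variable grip forces

stat, p = levene(expert, novice)
print(f"Levene W = {stat:.1f}, p = {p:.2g}")    # tiny p: variances differ
```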
- Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z)
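A minimal sketch of the underlying neural-process idea, in its deterministic conditional form: encode a context set of (x, y) pairs, aggregate by averaging, and decode predictions at new inputs. The multi-task transfer mechanism that is this paper's contribution is not reproduced here.

```python
# Deterministic conditional neural process (base model only, illustrative).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))
dec = nn.Sequential(nn.Linear(65, 64), nn.ReLU(), nn.Linear(64, 1))

def predict(ctx_x, ctx_y, tgt_x):
    """Encode the context set, aggregate by mean, decode at target inputs."""
    r = enc(torch.cat([ctx_x, ctx_y], dim=-1)).mean(0, keepdim=True)  # (1, 64)
    r = r.expand(tgt_x.size(0), -1)                                   # share across targets
    return dec(torch.cat([r, tgt_x], dim=-1))                         # (n_tgt, 1)

ctx_x, ctx_y = torch.randn(10, 1), torch.randn(10, 1)  # observed points of one task
print(predict(ctx_x, ctx_y, torch.randn(5, 1)).shape)  # torch.Size([5, 1])
```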
- Discovering Generalizable Skills via Automated Generation of Diverse Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
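One concrete ingredient named here, the discriminator whose output lower-bounds the diversity objective, can be sketched in the style of DIAYN-like methods: the discriminator infers which skill produced a state, and log q(z|s) - log p(z) serves as an intrinsic reward. The state size, skill count, and reward form are assumptions for illustration.

```python
# Discriminator-based diversity reward (DIAYN-style stand-in, illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_SKILLS, STATE = 4, 8
disc = nn.Sequential(nn.Linear(STATE, 32), nn.ReLU(), nn.Linear(32, N_SKILLS))

def diversity_reward(state, skill_id):
    """Intrinsic reward: how identifiable the active skill is from the state."""
    log_q = F.log_softmax(disc(state), dim=-1)[:, skill_id]   # log q(z | s)
    log_p = -torch.log(torch.tensor(float(N_SKILLS)))         # uniform prior log p(z)
    return log_q - log_p

states = torch.randn(3, STATE)      # states visited while executing skill 2
print(diversity_reward(states, 2))  # one reward per state
```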
- Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration [28.720332497794292]
We propose to automate the tool-navigation task by learning to mimic expert demonstrations.
A deep network is trained to imitate expert trajectories toward various locations on the retina, based on recorded visual servoing toward goals specified by the user.
We show that the network can reliably navigate a needle surgical tool to desired locations with an average accuracy of 137 microns in physical experiments and 94 microns in simulation.
arXiv Detail & Related papers (2020-11-16T08:30:02Z)
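The learning-from-demonstration recipe here reduces to behaviour cloning: regress a policy network onto expert actions given the observation and the user-specified goal. The sketch below shows that supervised loop on dummy data; the feature sizes and the MSE loss are illustrative choices, not the paper's.

```python
# Behaviour-cloning sketch: (observation, goal) -> action, regressed on demos.
import torch
import torch.nn as nn

OBS, GOAL, ACT = 16, 2, 3  # hypothetical observation, goal, and action sizes
policy = nn.Sequential(nn.Linear(OBS + GOAL, 64), nn.ReLU(), nn.Linear(64, ACT))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Dummy expert demonstrations: (observation, goal) -> expert action.
obs, goal = torch.randn(256, OBS), torch.randn(256, GOAL)
expert_act = torch.randn(256, ACT)

for step in range(100):  # behaviour cloning = supervised regression
    pred = policy(torch.cat([obs, goal], dim=-1))
    loss = nn.functional.mse_loss(pred, expert_act)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final imitation loss:", loss.item())
```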
- Recurrent and Spiking Modeling of Sparse Surgical Kinematics [0.8458020117487898]
A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots.
In this study, we explore the possibility of identifying individual surgeons of similar skill levels using only kinematic data.
We report that it is possible to identify surgical fellows receiving near-perfect scores in the simulation exercises based on their motion characteristics alone.
arXiv Detail & Related papers (2020-05-12T15:41:45Z)
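A sketch of the recurrent half of this entry only: an LSTM reads a multichannel kinematic sequence and emits skill-class logits. The sequence length, channel count, and two-class setup are assumptions; the paper's spiking models are not covered.

```python
# LSTM classifier over kinematic sequences (illustrative recurrent model).
import torch
import torch.nn as nn

class SkillLSTM(nn.Module):
    def __init__(self, channels=6, hidden=32, classes=2):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, kinematics):  # (batch, time, channels)
        _, (h, _) = self.lstm(kinematics)
        return self.head(h[-1])     # logits from the last hidden state

seq = torch.randn(4, 200, 6)   # 4 trials, 200 timesteps, 6 kinematic channels
print(SkillLSTM()(seq).shape)  # torch.Size([4, 2])
```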
- SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction [25.865648975312407]
We exploit deep learning methods for surgical perception, integrating deep neural networks capable of efficient feature extraction into the tissue reconstruction and instrument pose estimation processes.
Our framework thereby achieves state-of-the-art tracking performance in a surgical environment.
arXiv Detail & Related papers (2020-03-07T00:08:30Z)
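To make the "deep learning for feature extraction" ingredient concrete, the sketch below embeds two frames with a small (untrained) convolutional backbone and matches feature locations across frames by cosine similarity. This is a placeholder for SuPer Deep's trained extractors, not the framework itself.

```python
# Dense feature matching between two frames with an untrained conv backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 16, 3, stride=2, padding=1))

def match_features(frame_a, frame_b):
    """Nearest-neighbour matching between dense feature maps of two frames."""
    fa = backbone(frame_a).flatten(2).squeeze(0).T  # (locations, 16)
    fb = backbone(frame_b).flatten(2).squeeze(0).T
    sim = F.normalize(fa, dim=1) @ F.normalize(fb, dim=1).T
    return sim.argmax(dim=1)  # for each location in A, best match in B

a, b = torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32)
print(match_features(a, b).shape)  # one match index per feature location
```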
- Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search [63.07088785532908]
We propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification.
Our framework consistently outperforms existing methods on the suturing task of the JIGSAWS dataset in terms of accuracy, edit score, and F1 score.
arXiv Detail & Related papers (2020-02-20T13:12:38Z)
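The paper's reinforcement-learning-plus-tree-search decoder is too large for a short sketch, so this stand-in solves the same joint segmentation-and-classification decoding with a classic Viterbi dynamic program: choose per-frame gesture labels that maximise frame scores minus a penalty for switching gestures. The scores here are random placeholders for a frame classifier's outputs.

```python
# Viterbi decoding of per-frame gesture labels (stand-in, not the paper's RL).
import numpy as np

rng = np.random.default_rng(2)
T, G, SWITCH_COST = 12, 3, 1.5  # frames, gesture classes, switch penalty
scores = rng.random((T, G))     # stand-in per-frame classifier scores

best = scores[0].copy()         # best path score ending in each gesture
back = np.zeros((T, G), dtype=int)
for t in range(1, T):
    trans = best[:, None] - SWITCH_COST * (1 - np.eye(G))  # penalise switches
    back[t] = trans.argmax(axis=0)
    best = trans.max(axis=0) + scores[t]

labels = [int(best.argmax())]   # backtrack the optimal labelling
for t in range(T - 1, 0, -1):
    labels.append(int(back[t][labels[-1]]))
print(list(reversed(labels)))   # gesture label per frame
```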