Towards Surgical Context Inference and Translation to Gestures
- URL: http://arxiv.org/abs/2302.14237v1
- Date: Tue, 28 Feb 2023 01:39:36 GMT
- Title: Towards Surgical Context Inference and Translation to Gestures
- Authors: Kay Hutchinson, Zongyu Li, Ian Reyes, Homa Alemzadeh
- Abstract summary: Manual labeling of gestures in robot-assisted surgery is labor intensive, prone to errors, and requires expertise or training.
We propose a method for automated and explainable generation of gesture transcripts.
- Score: 1.858151490268935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manual labeling of gestures in robot-assisted surgery is labor intensive,
prone to errors, and requires expertise or training. We propose a method for
automated and explainable generation of gesture transcripts that leverages the
abundance of data for image segmentation to train a surgical scene segmentation
model that provides surgical tool and object masks. Surgical context is
detected using segmentation masks by examining the distances and intersections
between the tools and objects. Next, context labels are translated into gesture
transcripts using knowledge-based Finite State Machine (FSM) and data-driven
Long Short Term Memory (LSTM) models. We evaluate the performance of each stage
of our method by comparing the results with the ground truth segmentation
masks, the consensus context labels, and the gesture labels in the JIGSAWS
dataset. Our results show that our segmentation models achieve state-of-the-art
performance in recognizing needle and thread in Suturing and we can
automatically detect important surgical states with high agreement with
crowd-sourced labels (e.g., contact between graspers and objects in Suturing).
We also find that the FSM models are more robust to poor segmentation and
labeling performance than LSTMs. Our proposed method can significantly shorten
the gesture labeling process (~2.8 times).
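As a rough illustration of the pipeline the abstract describes (segmentation masks -> surgical context via distances and intersections -> gesture transcript via a knowledge-based FSM), the Python sketch below derives a coarse context label from a pair of binary masks and runs a sequence of such labels through a toy finite state machine. The helper names, distance threshold, and FSM transitions are assumptions made for illustration only, not the authors' implementation; the scipy-based distance transform is just one convenient way to measure mask proximity.
```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# context from tool/object masks, then a toy context-to-gesture FSM.
import numpy as np
from scipy.ndimage import distance_transform_edt


def min_mask_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Minimum pixel distance from any foreground pixel of mask_a to mask_b."""
    if not mask_a.any() or not mask_b.any():
        return float("inf")
    # Distance of every pixel to the nearest foreground pixel of mask_b.
    dist_to_b = distance_transform_edt(~mask_b.astype(bool))
    return float(dist_to_b[mask_a.astype(bool)].min())


def detect_context(tool_mask: np.ndarray, object_mask: np.ndarray,
                   touch_thresh: float = 5.0) -> str:
    """Map a tool/object mask pair to a coarse context label."""
    if np.logical_and(tool_mask, object_mask).any():
        return "contact"      # masks intersect -> tool touches object
    if min_mask_distance(tool_mask, object_mask) < touch_thresh:
        return "near"         # within a few pixels of each other
    return "apart"


# Toy knowledge-based FSM: consecutive context labels drive transitions
# between gesture states. Transitions below are invented for illustration;
# the gesture IDs loosely follow the JIGSAWS naming (G1, G2, ...).
GESTURE_FSM = {
    ("idle", "near"): "G1_reach_for_needle",
    ("G1_reach_for_needle", "contact"): "G2_position_needle",
    ("G2_position_needle", "apart"): "idle",
}


def translate_contexts(contexts, state="idle"):
    """Translate a sequence of context labels into gesture states."""
    gestures = []
    for c in contexts:
        state = GESTURE_FSM.get((state, c), state)  # stay put if no rule fires
        gestures.append(state)
    return gestures
```
The data-driven alternative mentioned in the abstract would replace the hand-written transition table with an LSTM trained on context-label sequences paired with ground-truth gesture transcripts.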
Related papers
- Surgical-DeSAM: Decoupling SAM for Instrument Segmentation in Robotic Surgery [9.466779367920049]
In safety-critical surgical tasks, prompting is not possible due to lack of per-frame prompts for supervised learning.
It is unrealistic to prompt frame-by-frame in a real-time tracking application, and it is expensive to annotate prompts for offline applications.
We develop Surgical-DeSAM to generate automatic bounding box prompts for decoupling SAM to obtain instrument segmentation in real-time robotic surgery.
arXiv Detail & Related papers (2024-04-22T09:53:55Z)
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- PWISeg: Point-based Weakly-supervised Instance Segmentation for Surgical Instruments [27.89003436883652]
We propose a weakly-supervised surgical instrument segmentation approach, named Point-based Weakly-supervised Instance (PWISeg)
PWISeg adopts an FCN-based architecture with point-to-box and point-to-mask branches to model the relationships between feature points and bounding boxes.
Based on this, we propose a key pixel association loss and a key pixel distribution loss, driving the point-to-mask branch to generate more accurate segmentation predictions.
arXiv Detail & Related papers (2023-11-16T11:48:29Z)
- Robotic Scene Segmentation with Memory Network for Runtime Surgical Context Inference [8.600278838838163]
Space Time Correspondence Network (STCN) is a memory network that performs binary segmentation and minimizes the effects of class imbalance.
We show that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread.
We also demonstrate that segmentation and context inference can be performed at runtime without compromising performance.
arXiv Detail & Related papers (2023-08-24T13:44:55Z)
- Data-Limited Tissue Segmentation using Inpainting-Based Self-Supervised Learning [3.7931881761831328]
Self-supervised learning (SSL) methods involving pretext tasks have shown promise in overcoming this requirement by first pretraining models using unlabeled data.
We evaluate the efficacy of two SSL methods (inpainting-based pretext tasks of context prediction and context restoration) for CT and MRI image segmentation in label-limited scenarios.
We demonstrate that optimally trained and easy-to-implement SSL segmentation models can outperform classically supervised methods for MRI and CT tissue segmentation in label-limited scenarios.
arXiv Detail & Related papers (2022-10-14T16:34:05Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- TraSeTR: Track-to-Segment Transformer with Contrastive Query for Instance-level Instrument Segmentation in Robotic Surgery [60.439434751619736]
We propose TraSeTR, a Track-to-Segment Transformer that exploits tracking cues to assist surgical instrument segmentation.
TraSeTR jointly reasons about the instrument type, location, and identity with instance-level predictions.
The effectiveness of our method is demonstrated with state-of-the-art instrument type segmentation results on three public datasets.
arXiv Detail & Related papers (2022-02-17T05:52:18Z)
- FUN-SIS: a Fully UNsupervised approach for Surgical Instrument Segmentation [16.881624842773604]
We present FUN-SIS, a Fully UNsupervised approach for binary Surgical Instrument Segmentation.
We train a per-frame segmentation model on completely unlabelled endoscopic videos, by relying on implicit motion information and instrument shape-priors.
The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with those of fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2022-02-16T15:32:02Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Co-Generation and Segmentation for Generalized Surgical Instrument Segmentation on Unlabelled Data [49.419268399590045]
Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays.
Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data.
In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries.
arXiv Detail & Related papers (2021-03-16T18:41:18Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal graph network approach (MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)