Towards Generalizable Surgical Activity Recognition Using Spatial
Temporal Graph Convolutional Networks
- URL: http://arxiv.org/abs/2001.03728v4
- Date: Thu, 13 Aug 2020 20:58:03 GMT
- Title: Towards Generalizable Surgical Activity Recognition Using Spatial
Temporal Graph Convolutional Networks
- Authors: Duygu Sarikaya, Pierre Jannin
- Abstract summary: We introduce a modality that is robust to scene variation, and that is able to infer part information such as orientational and relative spatial relationships.
The proposed modality is based on spatial temporal graph representations of surgical tools in videos, for surgical activity recognition.
- Score: 0.40611352512781856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling and recognition of surgical activities poses an interesting research
problem. Although a number of recent works have studied automatic recognition of
surgical activities, the generalizability of these works across different tasks
and datasets remains a challenge. We introduce a modality that is robust
to scene variation, and that is able to infer part information such as
orientational and relative spatial relationships. The proposed modality is
based on spatial temporal graph representations of surgical tools in videos,
for surgical activity recognition. To explore its effectiveness, we model and
recognize surgical gestures with the proposed modality. We construct spatial
graphs connecting the joint pose estimations of surgical tools. Then, we
connect each joint to the corresponding joint in the consecutive frames forming
inter-frame edges representing the trajectory of the joint over time. We then
learn hierarchical spatial temporal graph representations using Spatial
Temporal Graph Convolutional Networks (ST-GCN). Our experiments show that
learned spatial temporal graph representations perform well in surgical gesture
recognition even when used individually. We experiment with the Suturing task
of the JIGSAWS dataset where the chance baseline for gesture recognition is
10%. Our results demonstrate 68% average accuracy, a significant improvement
over this baseline. Learned hierarchical spatial temporal graph representations
can be used individually, in cascades, or as a complementary modality in
surgical activity recognition, and therefore provide a benchmark for future
studies. To our knowledge, our paper is the first to use spatial temporal graph
representations of surgical tools, and pose-based skeleton representations in
general, for surgical activity recognition.
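The abstract outlines the full pipeline: per-frame spatial graphs over tool joint poses, inter-frame trajectory edges, and hierarchical feature learning with ST-GCN. Below is a minimal sketch of that pipeline, assuming a hypothetical 5-joint tool skeleton and 2D joint coordinates; it is an illustration under those assumptions, not the authors' implementation.
```python
# Minimal ST-GCN sketch (not the authors' code): spatial graph conv over
# tool joints, then a temporal conv realizing the inter-frame trajectory
# edges. The 5-joint tool skeleton below is a hypothetical layout.
import torch
import torch.nn as nn

EDGES = [(0, 2), (1, 2), (2, 3), (3, 4)]   # intra-frame (spatial) edges
V = 5                                       # joints per frame

def normalized_adjacency(edges, v):
    """Symmetric, self-loop-augmented adjacency: D^-1/2 (A + I) D^-1/2."""
    a = torch.eye(v)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d = a.sum(dim=1).pow(-0.5)
    return a * d.unsqueeze(0) * d.unsqueeze(1)

class STGCNBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution.
    The temporal conv mixes each joint with itself in neighboring frames,
    which corresponds to the inter-frame trajectory edges."""
    def __init__(self, in_ch, out_ch, adj, t_kernel=9):
        super().__init__()
        self.register_buffer("adj", adj)
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):            # x: (N, C, T, V)
        x = self.spatial(x)          # per-joint feature transform
        x = torch.einsum("nctv,vw->nctw", x, self.adj)  # aggregate neighbors
        return self.relu(self.temporal(x))

x = torch.randn(2, 2, 100, V)        # 2 clips, (x, y) coords, 100 frames
block = STGCNBlock(2, 64, normalized_adjacency(EDGES, V))
print(block(x).shape)                # torch.Size([2, 64, 100, 5])
```
A gesture classifier would stack several such blocks, pool over frames and joints, and map the pooled feature to the 10 suturing gesture classes.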
Related papers
- VISAGE: Video Synthesis using Action Graphs for Surgery [34.21344214645662]
We introduce the novel task of future video generation in laparoscopic surgery.
Our proposed method, VISAGE, leverages the power of action scene graphs to capture the sequential nature of laparoscopic procedures.
Results of our experiments demonstrate high-fidelity video generation for laparoscopy procedures.
arXiv Detail & Related papers (2024-10-23T10:28:17Z)
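The VISAGE abstract does not specify its action scene graph format; the toy data structure below, with hypothetical (instrument, action, target) triplets chained by temporal edges, only illustrates how such a graph can capture the sequential nature of a procedure.
```python
# Hedged illustration only: the triplet format and temporal-edge chaining
# are assumptions, not VISAGE's actual data structure.
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    instrument: str
    action: str
    target: str

@dataclass
class ActionGraph:
    nodes: list = field(default_factory=list)
    temporal_edges: list = field(default_factory=list)  # (from_idx, to_idx)

    def append(self, node: ActionNode):
        self.nodes.append(node)
        if len(self.nodes) > 1:                 # chain actions in order
            self.temporal_edges.append((len(self.nodes) - 2,
                                        len(self.nodes) - 1))

g = ActionGraph()
g.append(ActionNode("grasper", "retract", "gallbladder"))
g.append(ActionNode("hook", "dissect", "cystic_duct"))
print(g.temporal_edges)  # [(0, 1)]
```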
- SAR-RARP50: Segmentation of surgical instrumentation and Action Recognition on Robot-Assisted Radical Prostatectomy Challenge [72.97934765570069]
We release the first multimodal, publicly available, in-vivo dataset for surgical action recognition and semantic instrumentation segmentation, containing 50 suturing video segments of Robotic Assisted Radical Prostatectomy (RARP).
The aim of the challenge is to enable researchers to leverage the scale of the provided dataset and develop robust and highly accurate single-task action recognition and tool segmentation approaches in the surgical domain.
A total of 12 teams participated in the challenge, contributing 7 action recognition methods, 9 instrument segmentation techniques, and 4 multitask approaches that integrated both action recognition and instrument segmentation.
arXiv Detail & Related papers (2023-12-31T13:32:18Z)
- Phase-Specific Augmented Reality Guidance for Microscopic Cataract Surgery Using Long-Short Spatiotemporal Aggregation Transformer [14.568834378003707]
Phacoemulsification cataract surgery (PCS) is a routine procedure performed under a surgical microscope.
PCS guidance systems extract valuable information from surgical microscopic videos to enhance proficiency.
Existing PCS guidance systems suffer from non-phase-specific guidance, leading to redundant visual information.
We propose a novel phase-specific augmented reality (AR) guidance system, which offers tailored AR information corresponding to the recognized surgical phase.
arXiv Detail & Related papers (2023-09-11T02:56:56Z)
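The phase-specific idea reduces to a simple control flow: the recognized phase gates which AR overlay is rendered. The sketch below illustrates that gating with hypothetical phase names and overlay descriptions; the paper's actual phase set and guidance content are not given in this summary.
```python
# Phase names and overlay contents below are hypothetical placeholders.
PHASE_OVERLAYS = {
    "incision":            "show limbal incision guide",
    "capsulorhexis":       "show capsulorhexis circle overlay",
    "phacoemulsification": "show nucleus quadrant map",
    "irrigation":          "show residual cortex highlights",
}

def select_guidance(recognized_phase: str) -> str:
    # Fall back to no overlay for unknown phases, avoiding the redundant,
    # non-phase-specific guidance the paper criticizes.
    return PHASE_OVERLAYS.get(recognized_phase, "no overlay")

print(select_guidance("capsulorhexis"))  # show capsulorhexis circle overlay
```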
- GLSFormer: Gated-Long, Short Sequence Transformer for Step Recognition in Surgical Videos [57.93194315839009]
We propose a vision transformer-based approach to learn temporal features directly from sequence-level patches.
We extensively evaluate our approach on two cataract surgery video datasets, Cataract-101 and D99, and demonstrate superior performance compared to various state-of-the-art methods.
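As a rough illustration of learning from sequence-level patches, the sketch below tokenizes a short video clip into per-frame patches that a transformer could attend over; the patch size and shapes are illustrative, not GLSFormer's configuration.
```python
# Assumed tokenization sketch: one token per (frame, patch) pair.
import torch

def video_to_patch_tokens(video, patch=16):
    """video: (T, C, H, W) -> tokens: (T * num_patches, C * patch * patch)"""
    t, c, h, w = video.shape
    x = video.unfold(2, patch, patch).unfold(3, patch, patch)
    # x: (T, C, H/p, W/p, p, p) -> flatten into sequence-level tokens
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(t * (h // patch) * (w // patch), -1)
    return x

tokens = video_to_patch_tokens(torch.randn(8, 3, 224, 224))
print(tokens.shape)  # torch.Size([1568, 768]) = 8 frames * 196 patches
```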
arXiv Detail & Related papers (2023-07-20T17:57:04Z)
- Using Human Gaze For Surgical Activity Recognition [0.40611352512781856]
We propose to use human gaze with a spatial temporal attention mechanism for activity recognition in surgical videos.
Our model builds on an I3D-based architecture that learns temporal features using 3D convolutions, as well as an attention map guided by human gaze.
arXiv Detail & Related papers (2022-03-09T14:28:00Z)
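One way to picture gaze-driven attention is as a spatial map that re-weights 3D-convolution features. The fixed Gaussian map below is a simplification of the learned attention the abstract describes, with all shapes chosen for illustration.
```python
# Assumed simplification: a Gaussian centered at the recorded gaze point
# re-weights I3D-style feature maps; the real model learns this map.
import torch

def gaze_attention(features, gaze_xy, sigma=0.15):
    """features: (N, C, T, H, W); gaze_xy: (N, T, 2) in [0, 1] coords."""
    n, c, t, h, w = features.shape
    ys = torch.linspace(0, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w).view(1, 1, 1, w)
    gx = gaze_xy[..., 0].view(n, t, 1, 1)
    gy = gaze_xy[..., 1].view(n, t, 1, 1)
    attn = torch.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return features * attn.unsqueeze(1)   # broadcast over channels

feats = torch.randn(2, 64, 16, 14, 14)
gaze = torch.rand(2, 16, 2)
print(gaze_attention(feats, gaze).shape)  # torch.Size([2, 64, 16, 14, 14])
```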
- Joint-bone Fusion Graph Convolutional Network for Semi-supervised Skeleton Action Recognition [65.78703941973183]
We propose a novel correlation-driven joint-bone fusion graph convolutional network (CD-JBF-GCN) as an encoder and use a pose prediction head as a decoder.
Specifically, the CD-JBF-GCN can explore the motion transmission between the joint stream and the bone stream.
The pose-prediction-based auto-encoder in the self-supervised training stage allows the network to learn motion representations from unlabeled data.
arXiv Detail & Related papers (2022-02-08T16:03:15Z)
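The bone stream in joint-bone fusion methods is conventionally derived from the joint stream as parent-to-child difference vectors. The sketch below shows that standard construction on a toy 5-joint skeleton, which is not the paper's actual topology.
```python
# Standard bone-stream construction; the parent map is a toy example.
import numpy as np

PARENTS = {1: 0, 2: 1, 3: 2, 4: 3}   # child -> parent joint indices

def joints_to_bones(joints):
    """joints: (T, V, C) joint coordinates -> bones: (T, V, C) vectors.
    The root joint keeps a zero bone vector."""
    bones = np.zeros_like(joints)
    for child, parent in PARENTS.items():
        bones[:, child] = joints[:, child] - joints[:, parent]
    return bones

joints = np.random.rand(100, 5, 2)    # 100 frames, 5 joints, (x, y)
print(joints_to_bones(joints).shape)  # (100, 5, 2)
```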
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
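The abstract does not define the relation keypoint heatmaps precisely; one plausible construction, assumed here, renders a Gaussian at the midpoint of a landmark pair so a network sees their spatial relation as a dense target.
```python
# Assumed construction: a midpoint Gaussian between two landmarks.
import numpy as np

def relation_heatmap(p1, p2, shape=(64, 64), sigma=3.0):
    """p1, p2: (x, y) landmark coords -> heatmap peaked at their midpoint."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - mx) ** 2 + (ys - my) ** 2) / (2 * sigma ** 2))

hm = relation_heatmap((10, 20), (40, 50))
print(hm.shape, hm.argmax())  # (64, 64) 2265 -> peak at row 35, col 25
```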
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal graph network (MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
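As a hedged sketch of multi-modal graph fusion, the layer below passes messages between one visual node and two kinematics nodes; MRG-Net's actual topology and update rules are not specified in this summary.
```python
# Assumed one-round message-passing layer over modality nodes.
import torch
import torch.nn as nn

class FusionGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, nodes, adj):
        """nodes: (N, D) one embedding per modality node; adj: (N, N)."""
        messages = adj @ self.msg(nodes)             # aggregate neighbors
        return torch.relu(self.upd(torch.cat([nodes, messages], dim=-1)))

# Three nodes: video embedding plus left/right manipulator kinematics.
nodes = torch.randn(3, 128)
adj = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
print(FusionGraphLayer(128)(nodes, adj).shape)  # torch.Size([3, 128])
```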
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
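A plausible reading of "graph signals leveraging both local image features and global shape features" is to concatenate a feature vector sampled at each landmark with its normalized coordinates; the sketch below shows that assumed construction, not the paper's exact formulation.
```python
# Assumed node-feature construction: local feature sample + coordinates.
import numpy as np

def build_graph_signals(feature_map, landmarks):
    """feature_map: (C, H, W); landmarks: (V, 2) integer (row, col) coords.
    Returns (V, C + 2) node features: sampled local feature + coords."""
    local = np.stack([feature_map[:, r, c] for r, c in landmarks])  # (V, C)
    coords = landmarks / np.array(feature_map.shape[1:], dtype=float)
    return np.concatenate([local, coords], axis=1)

fmap = np.random.rand(32, 64, 64)
lms = np.array([[10, 12], [30, 40], [50, 20]])
print(build_graph_signals(fmap, lms).shape)  # (3, 34)
```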
- Multi-Task Recurrent Neural Network for Surgical Gesture Recognition and Progress Prediction [17.63619129438996]
We propose a multi-task recurrent neural network for simultaneous recognition of surgical gestures and estimation of a novel formulation of surgical task progress.
We demonstrate that recognition performance improves in multi-task frameworks with progress estimation, without requiring any additional manual labelling or training.
arXiv Detail & Related papers (2020-03-10T14:28:02Z)
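The multi-task setup can be pictured as a shared recurrent encoder with a gesture-classification head and a progress-regression head. The sketch below is an assumed minimal version; the 76-dim kinematics input and 10 gesture classes mirror JIGSAWS suturing rather than the paper's exact configuration.
```python
# Assumed minimal multi-task RNN: shared LSTM, two per-frame heads.
import torch
import torch.nn as nn

class MultiTaskRNN(nn.Module):
    def __init__(self, in_dim=76, hidden=128, n_gestures=10):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.gesture_head = nn.Linear(hidden, n_gestures)   # classification
        self.progress_head = nn.Linear(hidden, 1)           # regression

    def forward(self, x):                     # x: (N, T, in_dim) kinematics
        h, _ = self.rnn(x)
        progress = torch.sigmoid(self.progress_head(h))     # in [0, 1]
        return self.gesture_head(h), progress.squeeze(-1)

logits, progress = MultiTaskRNN()(torch.randn(2, 200, 76))
print(logits.shape, progress.shape)  # (2, 200, 10) (2, 200)
```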
This list is automatically generated from the titles and abstracts of the papers on this site.