Learning and Reasoning with the Graph Structure Representation in
Robotic Surgery
- URL: http://arxiv.org/abs/2007.03357v3
- Date: Thu, 10 Sep 2020 21:50:45 GMT
- Title: Learning and Reasoning with the Graph Structure Representation in
Robotic Surgery
- Authors: Mobarakol Islam, Lalithkumar Seenivasan, Lim Chwee Ming, Hongliang Ren
- Abstract summary: Learning to infer graph representations can play a vital role in surgical scene understanding in robotic surgery.
We develop an approach to generate the scene graph and predict surgical interactions between instruments and surgical region of interest.
- Score: 15.490603884631764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to infer graph representations and performing spatial reasoning in a
complex surgical environment can play a vital role in surgical scene
understanding in robotic surgery. For this purpose, we develop an approach to
generate the scene graph and predict surgical interactions between instruments
and surgical region of interest (ROI) during robot-assisted surgery. We design
an attention link function and integrate it with a graph parsing network to
recognize the surgical interactions. To embed each node with corresponding
neighbouring node features, we further incorporate SageConv into the network.
The scene graph generation and active edge classification mostly depend on the
embedding or feature extraction of node and edge features from complex image
representation. Here, we empirically evaluate feature extraction methods by
employing a label-smoothed weighted loss. Smoothing the hard labels avoids
over-confident predictions and enhances the feature representation learned by
the penultimate layer. To obtain the graph scene
label, we annotate the bounding boxes and the instrument-ROI interactions on the
robotic scene segmentation challenge 2018 dataset with the help of an
experienced clinical expert in robotic surgery, and use these annotations to
evaluate our propositions.
Related papers
- Revisiting Surgical Instrument Segmentation Without Human Intervention: A Graph Partitioning View [7.594796294925481]
We propose an unsupervised method that reframes video frame segmentation as a graph-partitioning problem.
A self-supervised pre-trained model is first leveraged as a feature extractor to capture high-level semantic features.
On the "deep" eigenvectors, a surgical video frame is meaningfully segmented into regions such as tools and tissues, providing distinguishable semantic information.
arXiv Detail & Related papers (2024-08-27T05:31:30Z)
- SANGRIA: Surgical Video Scene Graph Optimization for Surgical Workflow Prediction [37.86132786212667]
We introduce an end-to-end framework for the generation and optimization of surgical scene graphs.
Our solution outperforms the SOTA on the CATARACTS dataset by 8% accuracy and 10% F1 score in surgical workflow prediction.
arXiv Detail & Related papers (2024-07-29T17:44:34Z)
- Hypergraph-Transformer (HGT) for Interactive Event Prediction in Laparoscopic and Robotic Surgery [50.3022015601057]
We propose a predictive neural network that is capable of understanding and predicting critical interactive aspects of surgical workflow from intra-abdominal video.
We verify our approach on established surgical datasets and applications, including the detection and prediction of action triplets.
Our results demonstrate the superiority of our approach compared to unstructured alternatives.
arXiv Detail & Related papers (2024-02-03T00:58:05Z)
- Dynamic Scene Graph Representation for Surgical Video [37.22552586793163]
We exploit scene graphs as a more holistic, semantically meaningful and human-readable way to represent surgical videos.
We create a scene graph dataset from semantic segmentations from the CaDIS and CATARACTS datasets.
We demonstrate the benefits of surgical scene graphs regarding the explainability and robustness of model decisions.
arXiv Detail & Related papers (2023-09-25T21:28:14Z)
- SurGNN: Explainable visual scene understanding and assessment of surgical skill using graph neural networks [19.57785997767885]
This paper explores how graph neural networks (GNNs) can be used to enhance visual scene understanding and surgical skill assessment.
GNNs provide interpretable results, revealing the specific actions, instruments, or anatomical structures that contribute to the predicted skill metrics.
arXiv Detail & Related papers (2023-08-24T20:32:57Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- 4D-OR: Semantic Scene Graphs for OR Domain Modeling [72.1320671045942]
We propose using semantic scene graphs (SSG) to describe and summarize the surgical scene.
The nodes of the scene graphs represent different actors and objects in the room, such as medical staff, patients, and medical equipment.
We create the first publicly available 4D surgical SSG dataset, 4D-OR, containing ten simulated total knee replacement surgeries.
arXiv Detail & Related papers (2022-03-22T17:59:45Z)
- Multimodal Semantic Scene Graphs for Holistic Modeling of Surgical Procedures [70.69948035469467]
We take advantage of the latest computer vision methodologies for generating 3D graphs from camera views.
We then introduce the Multimodal Semantic Graph Scene (MSSG) which aims at providing unified symbolic and semantic representation of surgical procedures.
arXiv Detail & Related papers (2021-06-09T14:35:44Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Towards Generalizable Surgical Activity Recognition Using Spatial Temporal Graph Convolutional Networks [0.40611352512781856]
We introduce a modality that is robust to scene variation and able to infer part information such as orientation and relative spatial relationships.
The proposed modality is based on spatial temporal graph representations of surgical tools in videos, for surgical activity recognition.
arXiv Detail & Related papers (2020-01-11T09:11:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.