Not End-to-End: Explore Multi-Stage Architecture for Online Surgical Phase Recognition
- URL: http://arxiv.org/abs/2107.04810v1
- Date: Sat, 10 Jul 2021 11:00:38 GMT
- Title: Not End-to-End: Explore Multi-Stage Architecture for Online Surgical Phase Recognition
- Authors: Fangqiu Yi and Tingting Jiang
- Abstract summary: We propose a new non-end-to-end training strategy for the surgical phase recognition task.
Under this strategy, the refinement stage is trained separately on two proposed types of disturbed sequences.
We evaluate three different choices of refinement models to show that our analysis and solution are robust to the choices of specific multi-stage models.
- Score: 11.234115388848284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surgical phase recognition is of particular interest to computer-assisted
surgery systems, where the goal is to predict which phase is occurring at
each frame of a surgery video. Networks with a multi-stage architecture
have been widely applied to many computer vision tasks with rich patterns:
a predictor stage first outputs initial predictions, and an additional
refinement stage operates on those initial predictions to refine them
further. Existing works show that surgical video content is well ordered
and contains rich temporal patterns, making the multi-stage architecture
well suited to the surgical phase recognition task. However, we observe
that when the multi-stage architecture is applied naively to surgical
phase recognition, end-to-end training causes the refinement ability to
fall short of expectations. To address this problem, we propose a new
non-end-to-end training strategy and explore different designs of the
multi-stage architecture for the surgical phase recognition task. Under
the non-end-to-end strategy, the refinement stage is trained separately
on two proposed types of disturbed sequences. Meanwhile, we evaluate three
different choices of refinement model to show that our analysis and
solution are robust to the choice of specific multi-stage model. We
conduct experiments on two public benchmarks, the M2CAI16 Workflow
Challenge and the Cholec80 dataset. Results show that a multi-stage
architecture trained with our strategy substantially boosts the
performance of the current state-of-the-art single-stage model. Code is
available at https://github.com/ChinaYi/casual_tcn.
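To make the two-stage recipe above concrete, here is a minimal PyTorch sketch of a causal-TCN predictor plus a separately trained refinement stage. All names (CausalTCNStage, disturb), layer sizes, and the uniform random-flip disturbance are illustrative assumptions, not the authors' exact implementation; the paper's two actual disturbance types live in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PHASES = 7  # e.g., the 7 surgical phases annotated in Cholec80

class CausalTCNStage(nn.Module):
    """One temporal stage built from dilated *causal* 1-D convolutions,
    so frame t only sees frames <= t (required for online recognition)."""
    def __init__(self, in_ch, hid_ch=64, num_layers=8):
        super().__init__()
        self.inp = nn.Conv1d(in_ch, hid_ch, kernel_size=1)
        self.layers = nn.ModuleList([
            nn.Conv1d(hid_ch, hid_ch, kernel_size=3, dilation=2 ** i)
            for i in range(num_layers)])
        self.out = nn.Conv1d(hid_ch, NUM_PHASES, kernel_size=1)

    def forward(self, x):                       # x: (B, in_ch, T)
        h = self.inp(x)
        for conv in self.layers:
            pad = 2 * conv.dilation[0]          # left-pad only => causal
            h = h + F.relu(conv(F.pad(h, (pad, 0))))
        return self.out(h)                      # (B, NUM_PHASES, T)

def disturb(labels, flip_p=0.1):
    """Hypothetical disturbed-sequence generator: randomly corrupt the
    ground-truth phase sequence so the refinement stage learns to repair
    predictor-like mistakes. The paper proposes two specific disturbance
    types; this uniform random flip is only a stand-in."""
    noisy = labels.clone()
    mask = torch.rand(labels.shape) < flip_p
    noisy[mask] = torch.randint(0, NUM_PHASES, (int(mask.sum()),))
    return F.one_hot(noisy, NUM_PHASES).transpose(1, 2).float()  # (B, C, T)

predictor = CausalTCNStage(in_ch=2048)      # frame features -> phase logits
refiner = CausalTCNStage(in_ch=NUM_PHASES)  # phase probs -> refined logits
feats = torch.randn(1, 2048, 100)           # 100 frames of CNN features
labels = torch.randint(0, NUM_PHASES, (1, 100))

# Stage 1: the predictor is trained on frame features as usual
# (one illustrative optimization step shown).
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_p = F.cross_entropy(predictor(feats), labels)
opt_p.zero_grad(); loss_p.backward(); opt_p.step()

# Stage 2 (non-end-to-end): the refiner is trained separately on
# *disturbed* sequences rather than on the predictor's outputs, so it
# does not overfit to one particular predictor's error pattern.
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-3)
loss_r = F.cross_entropy(refiner(disturb(labels)), labels)
opt_r.zero_grad(); loss_r.backward(); opt_r.step()

# Inference: chain the two stages.
refined_logits = refiner(predictor(feats).softmax(dim=1))
```

Because the refiner never sees the predictor's outputs or gradients during training, the two stages stay decoupled, which is the crux of the non-end-to-end strategy.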
Related papers
- Multi-view Video-Pose Pretraining for Operating Room Surgical Activity Recognition [5.787586057526269]
Surgical activity recognition is a key computer vision task that detects activities or phases from multi-view camera recordings.
Existing SAR models often fail to account for fine-grained clinician movements and multi-view knowledge.
We propose a novel calibration-free multi-view multi-modal pretraining framework, called Multi-view Pretraining for Video-Pose Surgical Activity Recognition (PreViPS).
arXiv Detail & Related papers (2025-02-19T17:08:04Z)
- CPath-Omni: A Unified Multimodal Foundation Model for Patch and Whole Slide Image Analysis in Computational Pathology [17.781388341968967]
CPath-Omni is the first LMM designed to unify both patch- and WSI-level image analysis.
CPath-Omni achieves state-of-the-art (SOTA) performance across seven diverse tasks on 39 out of 42 datasets.
CPath-CLIP, for the first time, integrates different vision models and incorporates a large language model as a text encoder to build a more powerful CLIP model.
arXiv Detail & Related papers (2024-12-16T18:46:58Z)
- SurgPETL: Parameter-Efficient Image-to-Surgical-Video Transfer Learning for Surgical Phase Recognition [9.675072799670458]
"Image pre-training followed by video fine-tuning" for high-dimensional video data poses significant performance bottlenecks.
In this paper, we develop a parameter-efficient transfer learning benchmark SurgPETL for surgical phase recognition.
We conduct extensive experiments with three advanced methods based on ViTs of two distinct scales pre-trained on five large-scale natural and medical datasets.
arXiv Detail & Related papers (2024-09-30T08:33:50Z)
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Reusable Architecture Growth for Continual Stereo Matching [92.36221737921274]
We introduce a Reusable Architecture Growth (RAG) framework to learn new scenes continually in both supervised and self-supervised manners.
RAG can maintain high reusability during growth by reusing previous units while obtaining good performance.
We also present a Scene Router module to adaptively select the scene-specific architecture path at inference.
arXiv Detail & Related papers (2024-03-30T13:24:58Z)
- Pixel-Wise Recognition for Holistic Surgical Scene Understanding [33.40319680006502]
This paper presents the Holistic and Multi-Granular Surgical Scene Understanding of Prostatectomies dataset.
Our benchmark models surgical scene understanding as a hierarchy of complementary tasks with varying levels of granularity.
To exploit our proposed benchmark, we introduce the Transformers for Actions, Phases, Steps, and Instrument (TAPIS) model.
arXiv Detail & Related papers (2024-01-20T09:09:52Z)
- MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation [87.16030562892537]
We propose a multi-stage architecture for the temporal action segmentation task.
The first stage generates an initial prediction that is refined by the next ones (see the sketch after this entry).
Our models achieve state-of-the-art results on three datasets.
arXiv Detail & Related papers (2020-06-16T14:50:47Z)
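A minimal, self-contained sketch of this stacked-refinement pattern follows (simplified acausal stages with assumed layer sizes; not the published MS-TCN++ architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNStage(nn.Module):
    """A single dilated temporal-convolution stage (simplified placeholder)."""
    def __init__(self, in_ch, num_classes, hid=64, num_layers=8):
        super().__init__()
        self.inp = nn.Conv1d(in_ch, hid, kernel_size=1)
        self.convs = nn.ModuleList([
            nn.Conv1d(hid, hid, kernel_size=3, padding=2 ** i, dilation=2 ** i)
            for i in range(num_layers)])
        self.out = nn.Conv1d(hid, num_classes, kernel_size=1)

    def forward(self, x):                        # x: (B, in_ch, T)
        h = self.inp(x)
        for conv in self.convs:
            h = h + F.relu(conv(h))              # residual dilated block
        return self.out(h)                       # (B, num_classes, T)

class MultiStageTCN(nn.Module):
    """Stage 1 predicts from frame features; every later stage re-predicts
    from the previous stage's class probabilities, i.e., refines them."""
    def __init__(self, in_ch=2048, num_classes=7, num_stages=4):
        super().__init__()
        self.stages = nn.ModuleList(
            [TCNStage(in_ch, num_classes)] +
            [TCNStage(num_classes, num_classes) for _ in range(num_stages - 1)])

    def forward(self, x):
        outs = [self.stages[0](x)]
        for stage in self.stages[1:]:
            outs.append(stage(outs[-1].softmax(dim=1)))
        return outs

model = MultiStageTCN()
feats = torch.randn(1, 2048, 100)
labels = torch.randint(0, 7, (1, 100))
# Supervise every stage's output, as multi-stage TCNs typically do.
loss = sum(F.cross_entropy(out, labels) for out in model(feats))
```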
- Deep Multimodal Neural Architecture Search [178.35131768344246]
We devise a generalized deep multimodal neural architecture search (MMnas) framework for various multimodal learning tasks.
Given multimodal input, we first define a set of primitive operations, and then construct a deep encoder-decoder based unified backbone.
On top of the unified backbone, we attach task-specific heads to tackle different multimodal learning tasks (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-25T07:00:32Z)
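The unified-backbone-plus-heads pattern this entry describes can be sketched generically as follows (a hypothetical fixed backbone stands in for the searched MMnas primitive operations; all module names and sizes are assumptions):

```python
import torch
import torch.nn as nn

class UnifiedBackbone(nn.Module):
    """Stand-in encoder-decoder over fused multimodal input; MMnas would
    search over primitive operations here instead of fixed layers."""
    def __init__(self, img_dim=512, txt_dim=300, hid=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim + txt_dim, hid), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())

    def forward(self, img, txt):
        return self.dec(self.enc(torch.cat([img, txt], dim=-1)))

# Task-specific heads attached on top of the shared backbone.
backbone = UnifiedBackbone()
heads = nn.ModuleDict({
    "vqa": nn.Linear(256, 1000),      # e.g., answer classification
    "grounding": nn.Linear(256, 4),   # e.g., box regression
})

img, txt = torch.randn(8, 512), torch.randn(8, 300)
features = backbone(img, txt)
vqa_logits = heads["vqa"](features)   # pick the head for the task at hand
```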
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI (see the sketch below).
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
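One common way to realize "sharing all convolutional kernels across CT and MRI" is to share the convolutions while keeping normalization modality-specific; the sketch below illustrates that design under this assumption (not necessarily the paper's exact scheme):

```python
import torch
import torch.nn as nn

class SharedConvBlock(nn.Module):
    """Convolution kernels shared across CT and MRI; normalization kept
    modality-specific (one common design for unpaired multimodal nets)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm2d(out_ch),
            "mri": nn.BatchNorm2d(out_ch),
        })
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, modality):          # modality in {"ct", "mri"}
        return self.act(self.norm[modality](self.conv(x)))

block = SharedConvBlock(1, 32)
ct_scan = torch.randn(2, 1, 128, 128)
mri_scan = torch.randn(2, 1, 128, 128)
ct_feat = block(ct_scan, "ct")     # same kernels ...
mri_feat = block(mri_scan, "mri")  # ... different normalization statistics
```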
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.