Learning Motion Flows for Semi-supervised Instrument Segmentation from
Robotic Surgical Video
- URL: http://arxiv.org/abs/2007.02501v1
- Date: Mon, 6 Jul 2020 02:39:32 GMT
- Title: Learning Motion Flows for Semi-supervised Instrument Segmentation from
Robotic Surgical Video
- Authors: Zixu Zhao, Yueming Jin, Xiaojie Gao, Qi Dou, Pheng-Ann Heng
- Abstract summary: We study the semi-supervised instrument segmentation from robotic surgical videos with sparse annotations.
By exploiting generated data pairs, our framework can recover and even enhance temporal consistency of training sequences.
Results show that our method outperforms the state-of-the-art semi-supervised methods by a large margin.
- Score: 64.44583693846751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performing low-hertz labeling for surgical videos at intervals can greatly
relieve the burden on surgeons. In this paper, we study the semi-supervised
instrument segmentation from robotic surgical videos with sparse annotations.
Unlike most previous methods that use unlabeled frames individually, we propose a
dual-motion-based method that learns motion flows for segmentation
enhancement by leveraging temporal dynamics. We first design a flow predictor
to derive the motion for jointly propagating frame-label pairs given the
current labeled frame. Considering the fast instrument motion, we further
introduce a flow compensator to estimate intermediate motion within continuous
frames, with a novel cycle learning strategy. By exploiting generated data
pairs, our framework can recover and even enhance temporal consistency of
training sequences to benefit segmentation. We validate our framework with
binary, part, and type tasks on 2017 MICCAI EndoVis Robotic Instrument
Segmentation Challenge dataset. Results show that our method outperforms the
state-of-the-art semi-supervised methods by a large margin, and even exceeds
fully supervised training on two tasks.
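As a rough illustration of the propagation idea described in the abstract, the sketch below warps a labeled frame and its segmentation mask to a nearby unlabeled time step using a dense flow field, yielding a new frame-label pair. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: `predict_flow`, `warp`, and `propagate_pair` are hypothetical names, the flow-predictor architecture, the flow compensator, and the cycle learning strategy are not reproduced, and the flow is assumed to be a backward field in pixel units with (dx, dy) channel ordering.

```python
# Minimal sketch of flow-based frame-label propagation (assumptions noted above).
import torch
import torch.nn.functional as F


def warp(x: torch.Tensor, flow: torch.Tensor, mode: str = "bilinear") -> torch.Tensor:
    """Backward-warp x (N, C, H, W) with a dense flow field (N, 2, H, W) in pixels."""
    n, _, h, w = x.shape
    # Base sampling grid in pixel coordinates (channel 0 = x, channel 1 = y).
    ys, xs = torch.meshgrid(
        torch.arange(h, device=x.device, dtype=x.dtype),
        torch.arange(w, device=x.device, dtype=x.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    # Displace by the flow, then normalise to [-1, 1] as grid_sample expects.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(x, sample_grid, mode=mode, padding_mode="border",
                         align_corners=True)


def propagate_pair(labeled_frame, label_mask, unlabeled_frame, predict_flow):
    """Generate a pseudo frame-label pair at the unlabeled time step.

    `predict_flow(src, dst)` is assumed to return a backward flow (N, 2, H, W)
    that maps coordinates in `dst` back to `src`, so warping pulls labeled
    content forward to the unlabeled frame. `label_mask` is (N, 1, H, W).
    """
    flow = predict_flow(labeled_frame, unlabeled_frame)
    prop_frame = warp(labeled_frame, flow, mode="bilinear")
    prop_label = warp(label_mask.float(), flow, mode="nearest")  # keep labels discrete
    return prop_frame, prop_label
```

In the paper's setting, pairs generated this way would be added to the sparsely annotated training sequence, which is how the framework can recover temporal consistency; the compensated intermediate flows would serve the same purpose for fast-moving instruments.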
Related papers
- WeakSurg: Weakly supervised surgical instrument segmentation using temporal equivariance and semantic continuity [14.448593791011204]
We propose a weakly supervised surgical instrument segmentation approach that uses only instrument presence labels.
We take the inherent temporal attributes of surgical video into account and extend a two-stage weakly supervised segmentation paradigm.
Experiments are validated on two surgical video datasets, including one cholecystectomy surgery benchmark and one real robotic left lateral segment liver surgery dataset.
arXiv Detail & Related papers (2024-03-14T16:39:11Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical
Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- FUN-SIS: a Fully UNsupervised approach for Surgical Instrument
Segmentation [16.881624842773604]
We present FUN-SIS, a Fully UNsupervised approach for binary Surgical Instrument Segmentation.
We train a per-frame segmentation model on completely unlabelled endoscopic videos, by relying on implicit motion information and instrument shape-priors.
The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with the ones of fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2022-02-16T15:32:02Z)
- Efficient Global-Local Memory for Real-time Instrument Segmentation of
Robotic Surgical Video [53.14186293442669]
We identify two important clues for surgical instrument perception: local temporal dependency from adjacent frames and global semantic correlation over long-range durations.
We propose a novel dual-memory network (DMNet) to relate both global and local-temporal knowledge.
Our method largely outperforms the state-of-the-art works on segmentation accuracy while maintaining a real-time speed.
arXiv Detail & Related papers (2021-09-28T10:10:14Z)
- Self-supervised Video Object Segmentation by Motion Grouping [79.13206959575228]
We develop a computer vision system able to segment objects by exploiting motion cues.
We introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background.
We evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59).
arXiv Detail & Related papers (2021-04-15T17:59:32Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
- Unsupervised Surgical Instrument Segmentation via Anchor Generation and
Semantic Diffusion [17.59426327108382]
A more affordable unsupervised approach is developed in this paper.
In the experiments on the 2017 MICCAI EndoVis Robotic Instrument Segmentation Challenge dataset, the proposed method achieves 0.71 IoU and 0.81 Dice score without using a single manual annotation.
arXiv Detail & Related papers (2020-08-27T06:54:27Z)
- Self-supervised Sparse to Dense Motion Segmentation [13.888344214818737]
We propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames.
We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS16.
arXiv Detail & Related papers (2020-08-18T11:40:18Z)
- Motion-supervised Co-Part Segmentation [88.40393225577088]
We propose a self-supervised deep learning method for co-part segmentation.
Our approach develops the idea that motion information inferred from videos can be leveraged to discover meaningful object parts.
arXiv Detail & Related papers (2020-04-07T09:56:45Z)