FUN-SIS: a Fully UNsupervised approach for Surgical Instrument
Segmentation
- URL: http://arxiv.org/abs/2202.08141v1
- Date: Wed, 16 Feb 2022 15:32:02 GMT
- Title: FUN-SIS: a Fully UNsupervised approach for Surgical Instrument
Segmentation
- Authors: Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas
Padoy
- Abstract summary: We present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation.
We train a per-frame segmentation model on completely unlabelled endoscopic videos, by relying on implicit motion information and instrument shape-priors.
The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with the ones of fully-supervised state-of-the-art approaches.
- Score: 16.881624842773604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic surgical instrument segmentation of endoscopic images is a crucial
building block of many computer-assistance applications for minimally invasive
surgery. So far, state-of-the-art approaches completely rely on the
availability of a ground-truth supervision signal, obtained via manual
annotation, which is expensive to collect at large scale. In this paper, we present
FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument
Segmentation. FUN-SIS trains a per-frame segmentation model on completely
unlabelled endoscopic videos, by solely relying on implicit motion information
and instrument shape-priors. We define shape-priors as realistic segmentation
masks of the instruments, not necessarily coming from the same dataset/domain
as the videos. The shape-priors can be collected in various and convenient
ways, such as recycling existing annotations from other datasets. We leverage
them as part of a novel generative-adversarial approach, which allows us to perform
unsupervised instrument segmentation of optical-flow images during training. We
then use the obtained instrument masks as pseudo-labels in order to train a
per-frame segmentation model; to this end, we develop a
learning-from-noisy-labels architecture, designed to extract a clean
supervision signal from these pseudo-labels, leveraging their peculiar noise
properties. We validate the proposed contributions on three surgical datasets,
including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge
dataset. The obtained fully-unsupervised results for surgical instrument
segmentation are almost on par with the ones of fully-supervised
state-of-the-art approaches. This suggests the tremendous potential of the
proposed method to leverage the great amount of unlabelled data produced in the
context of minimally invasive surgery.
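To make the adversarial stage concrete, below is a minimal PyTorch sketch: a small segmenter predicts soft masks from optical-flow frames, while a discriminator scores predicted masks against shape-prior masks drawn from a separate pool. All architectures, shapes, and hyper-parameters are illustrative assumptions, not the paper's actual networks, and the follow-up learning-from-noisy-labels stage is omitted.
```python
# Illustrative sketch of adversarial mask learning from optical flow.
# Architectures, shapes and hyper-parameters are stand-ins, not FUN-SIS itself.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Predicts a soft binary mask from a 2-channel optical-flow frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, flow):
        return self.net(flow)

class TinyDiscriminator(nn.Module):
    """Scores how much a mask resembles a realistic instrument shape-prior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, mask):
        return self.net(mask).mean(dim=(1, 2, 3))  # one logit per mask

seg, disc = TinySegmenter(), TinyDiscriminator()
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

flow = torch.randn(4, 2, 64, 64)                         # stand-in flow frames
priors = (torch.rand(4, 1, 64, 64) > 0.5).float()        # stand-in shape-priors

# Discriminator step: shape-priors are "real", predicted masks are "fake".
pred = seg(flow).detach()
d_loss = bce(disc(priors), torch.ones(4)) + bce(disc(pred), torch.zeros(4))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Segmenter step: produce masks the discriminator accepts as realistic.
g_loss = bce(disc(seg(flow)), torch.ones(4))
opt_s.zero_grad()
g_loss.backward()
opt_s.step()
# The resulting masks would then serve as noisy pseudo-labels per frame.
```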
Related papers
- AMNCutter: Affinity-Attention-Guided Multi-View Normalized Cutter for Unsupervised Surgical Instrument Segmentation [7.594796294925481]
We propose a label-free unsupervised model featuring a novel module named Multi-View Normalized Cutter (m-NCutter).
Our model is trained using a graph-cutting loss function that leverages patch affinities for supervision, eliminating the need for pseudo-labels.
We conduct comprehensive experiments across multiple SIS datasets to validate our approach's state-of-the-art (SOTA) performance, robustness, and exceptional potential as a pre-trained model.
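As a rough illustration of graph-cutting supervision from patch affinities, a soft normalized-cut loss over soft cluster assignments might look as follows; the affinity construction and shapes are assumptions, not the paper's exact formulation.
```python
# Toy soft normalized-cut loss over patch affinities; illustrative only.
import torch

def soft_ncut_loss(affinity, assign):
    """affinity: (N, N) non-negative patch affinities; assign: (N, K) soft clusters."""
    degree = affinity.sum(dim=1)                              # node degrees
    assoc = torch.einsum('nk,nm,mk->k', assign, affinity, assign)
    norm = torch.einsum('nk,n->k', assign, degree)
    return assign.shape[1] - (assoc / norm.clamp_min(1e-8)).sum()

feats = torch.randn(196, 64)                                  # e.g. 14x14 patch features
affinity = torch.relu(feats @ feats.T)                        # toy affinity matrix
logits = torch.randn(196, 2, requires_grad=True)              # tool vs background
loss = soft_ncut_loss(affinity, logits.softmax(dim=1))
loss.backward()
```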
arXiv Detail & Related papers (2024-11-06T06:33:55Z)
- Generalizing Segmentation Foundation Model Under Sim-to-real Domain-shift for Guidewire Segmentation in X-ray Fluoroscopy [1.4353812560047192]
Sim-to-real domain adaptation approaches utilize synthetic data from simulations, offering a cost-effective solution.
We propose a strategy to adapt SAM to X-ray fluoroscopy guidewire segmentation without any annotation on the target domain.
Our method surpasses both pre-trained SAM and many state-of-the-art domain adaptation techniques by a large margin.
arXiv Detail & Related papers (2024-10-09T21:59:48Z)
- Amodal Segmentation for Laparoscopic Surgery Video Instruments [30.39518393494816]
We introduce AmodalVis to the realm of surgical instruments.
This technique identifies both the visible and occluded parts of an object.
To achieve this, we introduce a new Amodal Instruments dataset.
arXiv Detail & Related papers (2024-08-02T07:40:34Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
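For intuition on how optical flow can supply a free training signal, here is an assumed toy example that thresholds Farneback flow magnitude into a crude motion pseudo-mask; it is not the paper's pipeline.
```python
# Illustrative only: a crude motion pseudo-mask from dense optical flow.
import cv2
import numpy as np

prev_frame = np.random.randint(0, 255, (256, 256), dtype=np.uint8)
next_frame = np.roll(prev_frame, 5, axis=1)          # fake moving content

flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)             # per-pixel motion strength
pseudo_mask = (magnitude > magnitude.mean()).astype(np.uint8)
```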
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- SAF-IS: a Spatial Annotation Free Framework for Instance Segmentation of Surgical Tools [10.295921059528636]
We develop a framework for instance segmentation that does not rely on spatial annotations for training.
Our solution only requires binary tool masks, obtainable using recent unsupervised approaches, and binary tool presence labels.
We validate our framework on the EndoVis 2017 and 2018 segmentation datasets.
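A hypothetical sketch of one way to go from a binary tool mask to instance candidates, using connected components; the paper's actual pipeline may differ.
```python
# Hypothetical: split a binary tool mask into instance candidates.
import numpy as np
from scipy import ndimage

binary_mask = np.zeros((128, 128), dtype=np.uint8)
binary_mask[10:40, 10:40] = 1                        # toy tool blob 1
binary_mask[80:120, 60:110] = 1                      # toy tool blob 2

labeled, n_instances = ndimage.label(binary_mask)    # separate connected blobs
instances = [(labeled == i).astype(np.uint8) for i in range(1, n_instances + 1)]
# Each candidate would then be classified using binary tool-presence labels.
```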
arXiv Detail & Related papers (2023-09-04T17:13:06Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
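For illustration, a pseudo-label guided pixel-contrast term could be sketched as below; the sampling, temperature, and shapes are assumptions rather than the paper's exact loss.
```python
# Toy pseudo-label guided pixel contrast: pull together pixel embeddings
# sharing a pseudo-label, push apart the rest. Assumed setup only.
import torch
import torch.nn.functional as F

def pixel_contrast_loss(emb, pseudo, temperature=0.1):
    """emb: (N, D) sampled pixel embeddings; pseudo: (N,) pseudo-labels."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.T / temperature
    eye = torch.eye(len(emb), dtype=torch.bool)
    sim = sim.masked_fill(eye, -1e9)                 # exclude self-pairs
    same = (pseudo[:, None] == pseudo[None, :]).float().masked_fill(eye, 0.0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -((same * log_prob).sum(1) / same.sum(1).clamp_min(1)).mean()

emb = torch.randn(256, 32, requires_grad=True)       # 256 sampled pixels
pseudo = torch.randint(0, 2, (256,))                 # binary pseudo-labels
pixel_contrast_loss(emb, pseudo).backward()
```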
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- TraSeTR: Track-to-Segment Transformer with Contrastive Query for Instance-level Instrument Segmentation in Robotic Surgery [60.439434751619736]
We propose TraSeTR, a Track-to-Segment Transformer that exploits tracking cues to assist surgical instrument segmentation.
TraSeTR jointly reasons about the instrument type, location, and identity with instance-level predictions.
The effectiveness of our method is demonstrated with state-of-the-art instrument type segmentation results on three public datasets.
arXiv Detail & Related papers (2022-02-17T05:52:18Z)
- Co-Generation and Segmentation for Generalized Surgical Instrument Segmentation on Unlabelled Data [49.419268399590045]
Surgical instrument segmentation for robot-assisted surgery is needed for accurate instrument tracking and augmented reality overlays.
Deep learning-based methods have shown state-of-the-art performance for surgical instrument segmentation, but their results depend on labelled data.
In this paper, we demonstrate the limited generalizability of these methods on different datasets, including human robot-assisted surgeries.
arXiv Detail & Related papers (2021-03-16T18:41:18Z)
- Unsupervised Surgical Instrument Segmentation via Anchor Generation and Semantic Diffusion [17.59426327108382]
A more affordable unsupervised approach is developed in this paper.
In the experiments on the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset, the proposed method achieves 0.71 IoU and 0.81 Dice score without using a single manual annotation.
arXiv Detail & Related papers (2020-08-27T06:54:27Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows to train image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
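A minimal sketch of the cycle-consistency idea behind such unpaired image-to-annotation translation, with placeholder convolutions standing in for the paper's generators:
```python
# Minimal cycle-consistency term for unpaired image<->annotation translation;
# the generators here are placeholders, not the paper's networks.
import torch
import torch.nn as nn

g_img2mask = nn.Conv2d(3, 1, 3, padding=1)           # stand-in generator F
g_mask2img = nn.Conv2d(1, 3, 3, padding=1)           # stand-in generator G
l1 = nn.L1Loss()

image = torch.randn(2, 3, 64, 64)                    # unlabelled endoscopic frame
mask = torch.rand(2, 1, 64, 64)                      # unpaired annotation sample

# Forward and backward cycles must reconstruct their own inputs.
cycle_loss = (l1(g_mask2img(g_img2mask(image)), image)
              + l1(g_img2mask(g_mask2img(mask)), mask))
cycle_loss.backward()
```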
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Learning Motion Flows for Semi-supervised Instrument Segmentation from Robotic Surgical Video [64.44583693846751]
We study the semi-supervised instrument segmentation from robotic surgical videos with sparse annotations.
By exploiting generated data pairs, our framework can recover and even enhance temporal consistency of training sequences.
Results show that our method outperforms the state-of-the-art semi-supervised methods by a large margin.
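As a toy illustration of motion-based label propagation, a sparse annotation can be warped to a neighbouring frame with dense optical flow; the flow field and shapes below are fabricated for the example.
```python
# Toy flow-based label propagation: warp a frame-t annotation to frame t+1.
import cv2
import numpy as np

h, w = 128, 128
mask_t = np.zeros((h, w), dtype=np.uint8)
mask_t[40:80, 40:80] = 1                             # annotation on frame t

flow = np.ones((h, w, 2), dtype=np.float32) * 3.0    # toy flow: shift by (3, 3)
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
# Backward warping: sample frame-t labels at positions displaced by the flow.
map_x = (grid_x - flow[..., 0]).astype(np.float32)
map_y = (grid_y - flow[..., 1]).astype(np.float32)
mask_t1 = cv2.remap(mask_t, map_x, map_y, interpolation=cv2.INTER_NEAREST)
```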
arXiv Detail & Related papers (2020-07-06T02:39:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.