SAF-IS: a Spatial Annotation Free Framework for Instance Segmentation of
Surgical Tools
- URL: http://arxiv.org/abs/2309.01723v1
- Date: Mon, 4 Sep 2023 17:13:06 GMT
- Authors: Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas
Padoy
- Abstract summary: We develop a framework for instance segmentation not relying on spatial annotations for training.
Our solution only requires binary tool masks, obtainable using recent unsupervised approaches, and binary tool presence labels.
We validate our framework on the EndoVis 2017 and 2018 segmentation datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance segmentation of surgical instruments is a long-standing research
problem, crucial for the development of many applications for computer-assisted
surgery. This problem is commonly tackled via fully-supervised training of deep
learning models, requiring expensive pixel-level annotations to train. In this
work, we develop a framework for instance segmentation not relying on spatial
annotations for training. Instead, our solution only requires binary tool
masks, obtainable using recent unsupervised approaches, and binary tool
presence labels, freely obtainable in robot-assisted surgery. Based on the
binary mask information, our solution learns to extract individual tool
instances from single frames, and to encode each instance into a compact vector
representation, capturing its semantic features. Such representations guide the
automatic selection of a tiny number of instances (only 8 in our experiments),
displayed to a human operator for tool-type labelling. The gathered information
is finally used to match each training instance with a binary tool presence
label, providing an effective supervision signal to train a tool instance
classifier. We validate our framework on the EndoVis 2017 and 2018 segmentation
datasets. We provide results using binary masks obtained either by manual
annotation or as predictions of an unsupervised binary segmentation model. The
latter solution yields an instance segmentation approach completely free from
spatial annotations, outperforming several state-of-the-art fully-supervised
segmentation approaches.
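The instance-selection step described above (clustering compact instance embeddings and surfacing a handful of representatives for human tool-type labelling) can be sketched roughly as follows. The clustering strategy, the plain-NumPy k-means, and the budget of 8 are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def select_representatives(embeddings: np.ndarray, n_labels: int = 8, seed: int = 0):
    """Return indices of up to `n_labels` instances to show a human annotator.

    `embeddings` is an (n_instances, dim) array of per-instance semantic
    vectors (hypothetical stand-in for the paper's learned representations).
    """
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    # k-means (Lloyd's algorithm) over the instance embeddings
    centroids = embeddings[rng.choice(n, n_labels, replace=False)].astype(float)
    for _ in range(50):
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(n_labels):
            members = embeddings[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # the instance nearest each centroid is the representative to label
    dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
    return np.unique(dists.argmin(axis=0)).tolist()
```

Each returned index would then be shown to the operator once, and the assigned tool type propagated to all instances in the same cluster, matching the "tiny number of instances" supervision budget the abstract describes.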
Related papers
- Revisiting Surgical Instrument Segmentation Without Human Intervention: A Graph Partitioning View [7.594796294925481]
We propose an unsupervised method by reframing the video frame segmentation as a graph partitioning problem.
A self-supervised pre-trained model is firstly leveraged as a feature extractor to capture high-level semantic features.
On the "deep" eigenvectors, a surgical video frame is meaningfully segmented into different modules like tools and tissues, providing distinguishable semantic information.
arXiv Detail & Related papers (2024-08-27T05:31:30Z)
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- PWISeg: Point-based Weakly-supervised Instance Segmentation for Surgical Instruments [27.89003436883652]
We propose a weakly-supervised surgical instrument segmentation approach, named Point-based Weakly-supervised Instance Segmentation (PWISeg).
PWISeg adopts an FCN-based architecture with point-to-box and point-to-mask branches to model the relationships between feature points and bounding boxes.
Based on this, we propose a key pixel association loss and a key pixel distribution loss, driving the point-to-mask branch to generate more accurate segmentation predictions.
arXiv Detail & Related papers (2023-11-16T11:48:29Z)
- Synthetic Instance Segmentation from Semantic Image Segmentation Masks [15.477053085267404]
We propose a novel paradigm called Synthetic Instance Segmentation (SISeg).
SISeg obtains instance segmentation results by leveraging image masks generated by existing semantic segmentation models.
In other words, the proposed model does not need extra manpower or higher computational expenses.
arXiv Detail & Related papers (2023-08-02T05:13:02Z)
- A Simple Framework for Open-Vocabulary Segmentation and Detection [85.21641508535679]
We present OpenSeeD, a simple Open-vocabulary and Detection framework that jointly learns from different segmentation and detection datasets.
We first introduce a pre-trained text encoder to encode all the visual concepts in two tasks and learn a common semantic space for them.
After pre-training, our model exhibits competitive or stronger zero-shot transferability for both segmentation and detection.
arXiv Detail & Related papers (2023-03-14T17:58:34Z)
- Scribble-Supervised Medical Image Segmentation via Dual-Branch Network and Dynamically Mixed Pseudo Labels Supervision [15.414578073908906]
We propose a simple yet efficient scribble-supervised image segmentation method and apply it to cardiac MRI segmentation.
By combining the scribble supervision and auxiliary pseudo labels supervision, the dual-branch network can efficiently learn from scribble annotations end-to-end.
arXiv Detail & Related papers (2022-03-04T02:50:30Z)
- FreeSOLO: Learning to Segment Objects without Annotations [191.82134817449528]
We present FreeSOLO, a self-supervised instance segmentation framework built on top of the simple instance segmentation method SOLO.
Our method also presents a novel localization-aware pre-training framework, where objects can be discovered from complicated scenes in an unsupervised manner.
arXiv Detail & Related papers (2022-02-24T16:31:44Z)
- TraSeTR: Track-to-Segment Transformer with Contrastive Query for Instance-level Instrument Segmentation in Robotic Surgery [60.439434751619736]
We propose TraSeTR, a Track-to-Segment Transformer that exploits tracking cues to assist surgical instrument segmentation.
TraSeTR jointly reasons about the instrument type, location, and identity with instance-level predictions.
The effectiveness of our method is demonstrated with state-of-the-art instrument type segmentation results on three public datasets.
arXiv Detail & Related papers (2022-02-17T05:52:18Z)
- FUN-SIS: a Fully UNsupervised approach for Surgical Instrument Segmentation [16.881624842773604]
We present FUN-SIS, a Fully UNsupervised approach for binary Surgical Instrument Segmentation.
We train a per-frame segmentation model on completely unlabelled endoscopic videos, by relying on implicit motion information and instrument shape-priors.
The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with the ones of fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2022-02-16T15:32:02Z)
- Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals [78.12377360145078]
We introduce a novel two-step framework that adopts a predetermined prior in a contrastive optimization objective to learn pixel embeddings.
This marks a large deviation from existing works that relied on proxy tasks or end-to-end clustering.
In particular, when fine-tuning the learned representations using just 1% of labeled examples on PASCAL, we outperform supervised ImageNet pre-training by 7.1% mIoU.
arXiv Detail & Related papers (2021-02-11T18:54:47Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision settings.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.