Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks
- URL: http://arxiv.org/abs/2007.04505v1
- Date: Thu, 9 Jul 2020 01:39:39 GMT
- Title: Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks
- Authors: Daniil Pakhomov, Wei Shen, Nassir Navab
- Abstract summary: We formulate the problem as unpaired image-to-image translation, where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test the proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
- Score: 54.00217496410142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surgical tool segmentation in endoscopic images is an important problem: it
is a crucial step towards full instrument pose estimation and it is used for
integration of pre- and intra-operative images into the endoscopic view. While
many recent approaches based on convolutional neural networks have shown great
results, a key barrier to progress lies in the acquisition of a large number of
manually-annotated images which is necessary for an algorithm to generalize and
work well in diverse surgical scenarios. Unlike the surgical image data itself,
annotations are difficult to acquire and may be of variable quality. On the
other hand, synthetic annotations can be generated automatically from the
forward kinematic model of the robot and CAD models of the tools by projecting
them onto the image plane. Unfortunately, this kinematic model is very
inaccurate, so the resulting annotations cannot be used directly for supervised
learning of image segmentation models. Since the generated
annotations will not directly correspond to endoscopic images due to errors, we
formulate the problem as an unpaired image-to-image translation where the goal
is to learn the mapping between an input endoscopic image and a corresponding
annotation using an adversarial model. Our approach makes it possible to train
image segmentation models without acquiring expensive annotations and can
potentially exploit large unlabeled endoscopic image collections outside the
annotated distribution of image/annotation data. We test the proposed method on
the EndoVis 2017 challenge dataset and show that it is competitive with
supervised segmentation methods.
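The setup described in the abstract maps naturally onto a CycleGAN-style objective: one generator translates endoscopic images into annotations, a second generator translates annotations back into images, a discriminator in each domain enforces realism, and cycle-consistency losses tie the two unpaired domains together. The PyTorch sketch below illustrates one such training step; the module names (G_img2seg, G_seg2img, D_seg, D_img), the least-squares GAN loss, and lambda_cyc = 10.0 are assumptions for illustration, not details taken from the authors' implementation.

```python
# Minimal sketch of one cycle-consistent adversarial training step for unpaired
# image-to-annotation translation. All module names and hyperparameters here are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

def cycle_gan_step(G_img2seg, G_seg2img, D_seg, D_img,
                   endo_img, synth_ann, opt_G, opt_D, lambda_cyc=10.0):
    """endo_img: batch of unlabeled endoscopic frames.
    synth_ann: batch of kinematics/CAD-rendered annotations (unpaired with endo_img)."""
    gan_loss = nn.MSELoss()  # least-squares GAN objective (assumption)
    cyc_loss = nn.L1Loss()

    # Generator update: fool both discriminators and preserve cycle consistency.
    fake_ann = G_img2seg(endo_img)   # image -> predicted annotation
    fake_img = G_seg2img(synth_ann)  # annotation -> synthesized image
    rec_img = G_seg2img(fake_ann)    # image -> annotation -> reconstructed image
    rec_ann = G_img2seg(fake_img)    # annotation -> image -> reconstructed annotation
    pred_ann, pred_img = D_seg(fake_ann), D_img(fake_img)
    loss_G = (gan_loss(pred_ann, torch.ones_like(pred_ann))
              + gan_loss(pred_img, torch.ones_like(pred_img))
              + lambda_cyc * (cyc_loss(rec_img, endo_img) + cyc_loss(rec_ann, synth_ann)))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Discriminator update: real samples -> 1, generated samples (detached) -> 0.
    real_ann, real_img = D_seg(synth_ann), D_img(endo_img)
    det_ann, det_img = D_seg(fake_ann.detach()), D_img(fake_img.detach())
    loss_D = (gan_loss(real_ann, torch.ones_like(real_ann))
              + gan_loss(det_ann, torch.zeros_like(det_ann))
              + gan_loss(real_img, torch.ones_like(real_img))
              + gan_loss(det_img, torch.zeros_like(det_img)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()
```

At test time only G_img2seg would be kept: it maps an endoscopic frame directly to a segmentation mask, which is how an adversarially trained translator can double as a segmentation model.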
Related papers
- UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation [64.01742988773745]
An increasing privacy concern exists regarding training large-scale image segmentation models on unauthorized private data.
We exploit the concept of unlearnable examples to make images unusable for model training by generating and adding unlearnable noise to the original images.
We empirically verify the effectiveness of UnSeg across 6 mainstream image segmentation tasks, 10 widely used datasets, and 7 different network architectures.
arXiv Detail & Related papers (2024-10-13T16:34:46Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose DEC-Seg, a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Visual-Kinematics Graph Learning for Procedure-agnostic Instrument Tip Segmentation in Robotic Surgeries [29.201385352740555]
We propose a novel visual-kinematics graph learning framework to accurately segment the instrument tip given various surgical procedures.
Specifically, a graph learning framework is proposed to encode relational features of instrument parts from both image and kinematics data.
A cross-modal contrastive loss is designed to incorporate a robust geometric prior from kinematics into the image domain for tip segmentation.
arXiv Detail & Related papers (2023-09-02T14:52:58Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- Min-Max Similarity: A Contrastive Learning Based Semi-Supervised Learning Network for Surgical Tools Segmentation [0.0]
We propose a semi-supervised segmentation network based on contrastive learning.
In contrast to the previous state-of-the-art, we introduce a contrastive learning form of dual-view training.
Our proposed method outperforms state-of-the-art semi-supervised and fully supervised segmentation algorithms consistently.
arXiv Detail & Related papers (2022-03-29T01:40:26Z)
- Reducing Annotating Load: Active Learning with Synthetic Images in Surgical Instrument Segmentation [11.705954708866079]
Instrument segmentation in endoscopic views of robot-assisted surgery is challenging due to reflections on the instruments and frequent contact with tissue.
Deep neural networks (DNNs) show competitive performance and have been favored in recent years.
Motivated by alleviating the annotation workload, we propose a general embeddable method to decrease the usage of labeled real images.
arXiv Detail & Related papers (2021-08-07T22:30:53Z)
- Controllable cardiac synthesis via disentangled anatomy arithmetic [15.351113774542839]
We propose a framework termed "disentangled anatomy arithmetic".
A generative model learns to combine anatomical factors of different input images with the desired imaging modality.
Our model is used to generate realistic images, pathology labels, and segmentation masks.
arXiv Detail & Related papers (2021-07-04T23:13:33Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Going to Extremes: Weakly Supervised Medical Image Segmentation [12.700841704699615]
We suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model.
An initial segmentation is generated based on the extreme points utilizing the random walker algorithm.
This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network.
arXiv Detail & Related papers (2020-09-25T00:28:10Z)
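To make the two-stage idea in the entry above concrete, here is a minimal sketch of how extreme-point clicks could be turned into a noisy pseudo-mask with scikit-image's random walker before a segmentation network is trained on it. The seed-placement heuristic, the beta value, and the function name are assumptions for illustration, not the paper's implementation.

```python
# Sketch: extreme-point clicks -> random-walker pseudo-mask -> noisy target for a CNN.
# random_walker is real scikit-image API; everything else is an illustrative assumption.
import numpy as np
from skimage.segmentation import random_walker

def pseudo_mask_from_extreme_points(image, extreme_points, bg_margin=10, beta=130):
    """image: 2D grayscale array; extreme_points: (row, col) clicks on the object's
    top/bottom/left/right. Returns a binary pseudo-label mask."""
    rows = [p[0] for p in extreme_points]
    cols = [p[1] for p in extreme_points]
    labels = np.zeros(image.shape, dtype=np.int32)
    # Background seeds everywhere outside a margin around the clicks' bounding box.
    r0, r1 = max(min(rows) - bg_margin, 0), min(max(rows) + bg_margin, image.shape[0] - 1)
    c0, c1 = max(min(cols) - bg_margin, 0), min(max(cols) + bg_margin, image.shape[1] - 1)
    labels[:] = 2
    labels[r0:r1 + 1, c0:c1 + 1] = 0          # leave the region around the object unlabeled
    for r, c in extreme_points:
        labels[r, c] = 1                      # foreground seeds at the clicked points
    seg = random_walker(image, labels, beta=beta, mode='bf')
    return (seg == 1).astype(np.uint8)        # noisy supervision signal for the network
```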