Min-Max Similarity: A Contrastive Learning Based Semi-Supervised
Learning Network for Surgical Tools Segmentation
- URL: http://arxiv.org/abs/2203.15177v1
- Date: Tue, 29 Mar 2022 01:40:26 GMT
- Authors: Ange Lou, Xing Yao, Ziteng Liu and Jack Noble
- Abstract summary: We propose a semi-supervised segmentation network based on contrastive learning.
In contrast to the previous state-of-the-art, we introduce a contrastive learning form of dual-view training.
Our proposed method outperforms state-of-the-art semi-supervised and fully supervised segmentation algorithms consistently.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of images is a popular topic in medical AI, mainly
because of the difficulty of obtaining a significant amount of pixel-level
annotated data to train a neural network. To address this issue, we propose a
semi-supervised segmentation network based on contrastive learning. In contrast
to the previous state-of-the-art, we introduce a contrastive form of dual-view
training, employing classifiers and projectors to build all-negative pairs and
positive-and-negative feature pairs respectively, so that the learning problem
is formulated as a min-max similarity problem. The all-negative pairs supervise
the networks to learn from different views and ensure that general features are
captured, while the consistency of unlabeled predictions is measured by a
pixel-wise contrastive loss between positive and negative pairs. To evaluate
our proposed method quantitatively and qualitatively, we test it on two public
endoscopy surgical tool segmentation datasets and one cochlear implant surgery
dataset, for which we manually annotated the cochlear implant in the surgical
videos. The segmentation performance (Dice coefficients) indicates that our
proposed method consistently outperforms state-of-the-art semi-supervised and
fully supervised segmentation algorithms. The code is publicly available
at: https://github.com/AngeLouCN/Min_Max_Similarity
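The min-max objective described in the abstract can be sketched roughly as follows. This is a toy NumPy illustration, not the authors' implementation: the exact pairing of projector and classifier features, and the loss weighting, are assumptions made for clarity.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # Pixel-wise cosine similarity between two feature maps of shape (C, N),
    # where N is the number of pixel locations.
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + eps)
    return (a * b).sum(axis=0)  # shape (N,)

def min_max_similarity_loss(proj1, proj2, cls1, cls2):
    """Toy min-max similarity objective (illustrative only).

    proj1/proj2: projector features from the two views, shape (C, N);
        corresponding pixels form positive pairs whose similarity we maximize.
    cls1/cls2:   classifier features from the two views, treated here as
        all-negative pairs whose similarity we minimize.
    """
    pos = cosine_sim(proj1, proj2).mean()   # drive toward +1
    neg = cosine_sim(cls1, cls2).mean()     # drive toward -1
    # Minimizing (neg - pos) maximizes positive-pair similarity while
    # minimizing negative-pair similarity -- the "min-max" of the title.
    return neg - pos
```

With identical positive features and opposed negative features, the loss reaches its minimum of -2, which makes the intended optimization direction easy to check.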
Related papers
- Revisiting Surgical Instrument Segmentation Without Human Intervention: A Graph Partitioning View [7.594796294925481]
We propose an unsupervised method by reframing the video frame segmentation as a graph partitioning problem.
A self-supervised pre-trained model is first leveraged as a feature extractor to capture high-level semantic features.
On the "deep" eigenvectors, a surgical video frame is meaningfully segmented into different modules like tools and tissues, providing distinguishable semantic information.
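As a rough illustration of the graph-partitioning idea, a normalized-cut style bipartition of deep patch features can be sketched with NumPy. This is only a generic spectral-clustering sketch; the paper's actual eigenvector construction may differ.

```python
import numpy as np

def spectral_bipartition(features):
    """Split patches into two groups (e.g. tool vs. tissue) from deep
    features via the second eigenvector of a normalized graph Laplacian.

    features: (N, D) array, one row of features per patch.
    Returns a boolean mask of length N.
    """
    # Cosine-similarity affinity graph over patches.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    W = np.clip(f @ f.T, 0.0, None)        # keep non-negative edge weights
    d = W.sum(axis=1)
    # Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    Dinv = np.diag(1.0 / np.sqrt(d + 1e-8))
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]                   # second-smallest eigenvector
    return fiedler > np.median(fiedler)    # threshold into two segments
```

On two well-separated feature clusters, the Fiedler vector takes a roughly constant value per cluster, so thresholding at the median recovers the partition.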
arXiv Detail & Related papers (2024-08-27T05:31:30Z)
- SegMatch: A semi-supervised learning method for surgical instrument segmentation [10.223709180135419]
We propose SegMatch, a semi-supervised learning method that reduces the need for expensive annotation of laparoscopic and robotic surgical images.
SegMatch builds on FixMatch, a widespread semi-supervised classification pipeline combining consistency regularization and pseudo-labelling.
Our results demonstrate that adding unlabelled data for training purposes allows us to surpass the performance of fully supervised approaches.
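The FixMatch recipe that SegMatch builds on can be sketched as follows. This is an illustrative reconstruction of the general FixMatch idea, not the SegMatch authors' code; the threshold value and probability shapes are assumptions.

```python
import numpy as np

def fixmatch_style_loss(weak_probs, strong_probs, threshold=0.95):
    """Sketch of FixMatch-style pseudo-labelling for unlabeled pixels.

    weak_probs:   (N, K) class probabilities from a weakly augmented view;
                  their argmax serves as a hard pseudo-label.
    strong_probs: (N, K) class probabilities from a strongly augmented view.
    Only pixels where the weak prediction is confident (max prob >= threshold)
    contribute a cross-entropy consistency term.
    """
    conf = weak_probs.max(axis=1)
    pseudo = weak_probs.argmax(axis=1)
    mask = conf >= threshold                 # keep confident pixels only
    if not mask.any():
        return 0.0                           # nothing confident: no loss
    ce = -np.log(strong_probs[mask, pseudo[mask]] + 1e-8)
    return float(ce.mean())
```

Confident weak predictions supervise the strongly augmented view; below the confidence threshold, the unlabeled pixel is simply ignored.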
arXiv Detail & Related papers (2023-08-09T21:30:18Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- Efficient Self-Supervision using Patch-based Contrastive Learning for Histopathology Image Segmentation [0.456877715768796]
We propose a framework for self-supervised image segmentation using contrastive learning on image patches.
A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images.
The proposed model consists only of a simple FCNN with 10.8k parameters and requires about 5 minutes to converge on high-resolution microscopy datasets.
arXiv Detail & Related papers (2022-08-23T07:24:47Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain enough gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms state-of-the-art semi-supervised methods, demonstrating its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrastive learning method to boost scene segmentation.
We extensively evaluate our method on the public robotic surgery dataset EndoVis18 and the public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- Bootstrapping Semi-supervised Medical Image Segmentation with Anatomical-aware Contrastive Distillation [10.877450596327407]
We present ACTION, an Anatomical-aware ConTrastive dIstillatiON framework, for semi-supervised medical image segmentation.
We first develop an iterative contrastive distillation algorithm that softly labels the negatives rather than applying binary supervision between positive and negative pairs.
We also capture more semantically similar features from the randomly chosen negative set compared to the positives to enforce the diversity of the sampled data.
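One way to read "softly labeling the negatives" is to replace binary positive/negative targets with the teacher's similarity distribution over the negative set. The sketch below is an illustrative interpretation in NumPy, not the ACTION authors' formulation; the temperatures and the cross-entropy form are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_contrastive_distillation(student_sim, teacher_sim,
                                  tau_s=0.1, tau_t=0.07):
    """Soft-label contrastive distillation sketch.

    student_sim/teacher_sim: (N, M) cosine similarities of N anchors to a
    shared set of M negatives. Instead of treating all negatives as equally
    "negative", the student's similarity distribution is matched to the
    teacher's softened distribution via cross-entropy.
    """
    p_t = softmax(teacher_sim / tau_t, axis=1)            # soft labels
    log_p_s = np.log(softmax(student_sim / tau_s, axis=1) + 1e-12)
    return float(-(p_t * log_p_s).sum(axis=1).mean())
```

A student that ranks the negatives the same way as the teacher incurs a lower loss than one that prefers a different negative, which is the distillation signal.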
arXiv Detail & Related papers (2022-06-06T01:30:03Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Duo-SegNet: Adversarial Dual-Views for Semi-Supervised Medical Image Segmentation [14.535295064959746]
We propose a semi-supervised image segmentation technique based on the concept of multi-view learning.
Our proposed method outperforms state-of-the-art medical image segmentation algorithms consistently and comfortably.
arXiv Detail & Related papers (2021-08-25T10:16:12Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.