ClamNet: Using contrastive learning with variable depth Unets for
medical image segmentation
- URL: http://arxiv.org/abs/2206.05225v1
- Date: Fri, 10 Jun 2022 16:55:45 GMT
- Title: ClamNet: Using contrastive learning with variable depth Unets for
medical image segmentation
- Authors: Samayan Bhattacharya, Sk Shahnawaz, Avigyan Bhattacharya
- Abstract summary: Unets have become the standard method for semantic segmentation of medical images, along with fully convolutional networks (FCNs).
Unet++ was introduced as a variant of Unet to solve some of the problems facing Unet and FCNs.
We use contrastive learning to train Unet++ for semantic segmentation of medical images drawn from various sources.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unets have become the standard method for semantic segmentation of medical
images, along with fully convolutional networks (FCN). Unet++ was introduced as
a variant of Unet, in order to solve some of the problems facing Unet and FCNs.
Unet++ provided networks with an ensemble of variable-depth Unets, eliminating
the need for professionals to estimate the most suitable depth for a task.
While Unet and all its variants, including Unet++, aimed to provide networks
that train well without requiring large quantities of annotated data, none of
them attempted to eliminate the need for pixel-wise annotated data altogether.
Obtaining such data for each disease to be diagnosed comes at a high cost;
hence such data is scarce. In this paper, we use
contrastive learning to train Unet++ for semantic segmentation of medical
images using medical images from various sources including magnetic resonance
imaging (MRI) and computed tomography (CT), without the need for pixel-wise
annotations. Here we describe the architecture of the proposed model and the
training method used. This is still a work in progress, so we abstain from
including results in this paper. The results and the trained model will be
made available upon publication or in subsequent versions of this paper on
arXiv.
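Since the paper withholds implementation details, the following is only a minimal sketch of the kind of contrastive pretraining it describes: a SimCLR-style NT-Xent loss applied to pooled encoder features of a UNet++-like network. The encoder, projector, and augment names are hypothetical placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # Normalize the projections and compare every view with every other view.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # (2N, D)
    sim = z @ z.t() / temperature         # cosine-similarity logits
    sim.fill_diagonal_(float('-inf'))     # a view is never its own candidate
    n = z1.size(0)
    # The positive for row i is row i + n, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def pretrain_step(encoder, projector, batch, augment, optimizer):
    # Two random augmented views of the same unlabeled scans;
    # no pixel-wise annotations are involved at this stage.
    v1, v2 = augment(batch), augment(batch)
    # Pool the encoder's feature map into a vector and project it.
    z1 = projector(encoder(v1).mean(dim=(2, 3)))
    z2 = projector(encoder(v2).mean(dim=(2, 3)))
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```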
Related papers
- Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images [0.0]
Deep learning for medical imaging is limited by the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling efficient feature extraction and relationship modeling.
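As an illustrative sketch of the superpixel-to-graph idea: the Image Foresting Transform has no standard library implementation, so SLIC from scikit-image stands in for it below. The feature choice and parameters are our assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=200):
    # SLIC stands in for the Image Foresting Transform here.
    labels = slic(image, n_segments=n_segments, start_label=0)
    edges = set()
    # Regions whose labels differ across a pixel border are adjacent.
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    # One simple node feature: the mean intensity of each superpixel.
    feats = np.stack([image[labels == k].mean(axis=0)
                      for k in range(labels.max() + 1)])
    return feats, np.array(sorted(edges))
```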
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
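A minimal 2D sketch of the disruption idea, assuming random patch masking plus additive Gaussian noise as the low-level perturbation (the paper targets 3D volumes and its exact perturbations may differ):

```python
import torch
import torch.nn.functional as F

def disrupt(x, patch=8, mask_ratio=0.5, noise_std=0.1):
    # x: (N, C, H, W); H and W assumed divisible by `patch`.
    n, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    # Keep each patch with probability (1 - mask_ratio).
    keep = (torch.rand(n, 1, gh, gw, device=x.device) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    # Local masking plus a low-level perturbation (additive noise).
    return x * mask + noise_std * torch.randn_like(x)

# Pre-training target: reconstruct the clean input from its disruption,
# e.g. loss = F.mse_loss(model(disrupt(x)), x)
```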
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- MIPR: Automatic Annotation of Medical Images with Pixel Rearrangement [7.39560318487728]
We propose a novel approach that addresses the lack of annotated data from another angle, called medical image pixel rearrangement (MIPR for short).
MIPR combines image editing and pseudo-label technology to obtain labeled data.
Experiments on ISIC18 show that data annotated by our method is equal to or even better than doctors' annotations for the segmentation task.
arXiv Detail & Related papers (2022-04-22T05:54:14Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Duo-SegNet: Adversarial Dual-Views for Semi-Supervised Medical Image Segmentation [14.535295064959746]
We propose a semi-supervised image segmentation technique based on the concept of multi-view learning.
Our proposed method outperforms state-of-the-art medical image segmentation algorithms consistently and comfortably.
arXiv Detail & Related papers (2021-08-25T10:16:12Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces a variance error term into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
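One plausible reading of introducing a variance term into the loss, sketched with assumed names and weighting; this is a hedged interpretation, not the paper's exact formulation:

```python
import torch

def variance_aware_loss(model, views, target, task_loss, weight=0.1):
    # views: a list of augmented versions of the same inputs.
    preds = torch.stack([model(v) for v in views])    # (V, N, ...)
    # Task loss on the mean prediction, plus a penalty on how much the
    # predictions disagree across views ("variance error", our reading).
    task = task_loss(preds.mean(dim=0), target)
    variance = preds.var(dim=0, unbiased=False).mean()
    return task + weight * variance
```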
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Reducing Labelled Data Requirement for Pneumonia Segmentation using Image Augmentations [0.0]
We investigate the effect of image augmentations on reducing the requirement of labelled data in semantic segmentation of chest X-rays for pneumonia detection.
We train fully convolutional network models on subsets of different sizes from the total training data.
We find that rotate and mixup are the best of the augmentations tested (rotate, mixup, translate, gamma, and horizontal flip), reducing the labelled data requirement by 70%.
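Mixup itself is a standard technique, so a short sketch of how it applies to image/mask batches; the alpha value is a common default, not necessarily the paper's setting.

```python
import numpy as np
import torch

def mixup(images, masks, alpha=0.4):
    # Sample the mixing coefficient from a Beta distribution.
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(images.size(0))
    # Convex-combine each example with a random partner; masks must be
    # floats (soft labels) for the combination to make sense.
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_masks = lam * masks + (1 - lam) * masks[perm]
    return mixed_images, mixed_masks
```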
arXiv Detail & Related papers (2021-02-25T10:11:30Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without acquiring expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
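The cycle-consistency objective at the heart of such unpaired translation can be sketched as follows (adversarial and identity terms omitted; the generator names G_ab and G_ba are ours):

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b):
    # Translating a -> b -> a (and b -> a -> b) should return the input.
    loss_a = F.l1_loss(G_ba(G_ab(real_a)), real_a)
    loss_b = F.l1_loss(G_ab(G_ba(real_b)), real_b)
    return loss_a + loss_b
```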
arXiv Detail & Related papers (2020-07-09T01:39:39Z)