Data-Limited Tissue Segmentation using Inpainting-Based Self-Supervised
Learning
- URL: http://arxiv.org/abs/2210.07936v1
- Date: Fri, 14 Oct 2022 16:34:05 GMT
- Title: Data-Limited Tissue Segmentation using Inpainting-Based Self-Supervised
Learning
- Authors: Jeffrey Dominic, Nandita Bhaskhar, Arjun D. Desai, Andrew Schmidt,
Elka Rubin, Beliz Gunel, Garry E. Gold, Brian A. Hargreaves, Leon Lenchik,
Robert Boutin, Akshay S. Chaudhari
- Abstract summary: Self-supervised learning (SSL) methods involving pretext tasks have shown promise in overcoming this requirement by first pretraining models using unlabeled data.
We evaluate the efficacy of two SSL methods (inpainting-based pretext tasks of context prediction and context restoration) for CT and MRI image segmentation in label-limited scenarios.
We demonstrate that optimally trained and easy-to-implement SSL segmentation models can outperform classically supervised methods for MRI and CT tissue segmentation in label-limited scenarios.
- Score: 3.7931881761831328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although supervised learning has enabled high performance for image
segmentation, it requires a large amount of labeled training data, which can be
difficult to obtain in the medical imaging field. Self-supervised learning
(SSL) methods involving pretext tasks have shown promise in overcoming this
requirement by first pretraining models using unlabeled data. In this work, we
evaluate the efficacy of two SSL methods (inpainting-based pretext tasks of
context prediction and context restoration) for CT and MRI image segmentation
in label-limited scenarios, and investigate the effect of implementation design
choices for SSL on downstream segmentation performance. We demonstrate that
optimally trained and easy-to-implement inpainting-based SSL segmentation
models can outperform classically supervised methods for MRI and CT tissue
segmentation in label-limited scenarios, for both clinically-relevant metrics
and the traditional Dice score.
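The inpainting pretext tasks evaluated here corrupt part of an image and train a network to restore the missing content before fine-tuning on segmentation. A minimal numpy sketch of a context-restoration-style corruption and its reconstruction objective (our illustration with hypothetical names, not the authors' implementation):

```python
import numpy as np

def corrupt_patches(image, patch_size=8, n_patches=4, rng=None):
    """Context-restoration-style corruption (hypothetical sketch):
    swap random patch pairs so global intensity statistics are preserved."""
    rng = np.random.default_rng(rng)
    corrupted = image.copy()
    h, w = image.shape
    for _ in range(n_patches):
        ys = rng.integers(0, h - patch_size, size=2)
        xs = rng.integers(0, w - patch_size, size=2)
        a = corrupted[ys[0]:ys[0]+patch_size, xs[0]:xs[0]+patch_size].copy()
        b = corrupted[ys[1]:ys[1]+patch_size, xs[1]:xs[1]+patch_size].copy()
        corrupted[ys[0]:ys[0]+patch_size, xs[0]:xs[0]+patch_size] = b
        corrupted[ys[1]:ys[1]+patch_size, xs[1]:xs[1]+patch_size] = a
    return corrupted

def restoration_loss(pred, target):
    """Pretext objective: train a model f so that f(corrupted) ~ image,
    e.g. by minimising mean squared reconstruction error."""
    return float(np.mean((pred - target) ** 2))
```

Swapping patch pairs (rather than zeroing them) keeps the global intensity distribution intact, so the pretext network must learn spatial context rather than exploit simple intensity cues.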
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
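The mean-teacher scheme underlying PMT maintains a teacher network whose weights track an exponential moving average (EMA) of the student's; a minimal sketch of that update (our illustration, not the paper's exact procedure):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher weight update (hypothetical sketch): each teacher
    parameter is an exponential moving average of the student's."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]
```

Because the teacher averages over many student states, its predictions are smoother and can serve as higher-fidelity pseudo labels for the unlabeled data.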
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation [44.54301473673582]
Semi-supervised learning (SSL) has achieved notable progress in medical image segmentation.
Recent developments in visual foundation models, such as the Segment Anything Model (SAM), have demonstrated remarkable adaptability.
We propose a cross-prompting consistency method with segment anything model (CPC-SAM) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2024-07-07T15:43:20Z)
- Self-supervised learning for skin cancer diagnosis with limited training data [0.196629787330046]
Self-supervised learning (SSL) is an alternative to the standard supervised pre-training on ImageNet for scenarios with limited training data.
We consider further SSL pre-training on task-specific datasets, where our implementation is motivated by supervised transfer learning.
We find minimal further SSL pre-training on task-specific data can be as effective as large-scale SSL pre-training on ImageNet for medical image classification tasks with limited labelled data.
arXiv Detail & Related papers (2024-01-01T08:11:38Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Benchmarking Self-Supervised Learning on Diverse Pathology Datasets [10.868779327544688]
Self-supervised learning has been shown to be an effective method for utilizing unlabeled data.
We execute the largest-scale study of SSL pre-training on pathology image data.
For the first time, we apply SSL to the challenging task of nuclei instance segmentation.
arXiv Detail & Related papers (2022-12-09T06:38:34Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- A Simple Baseline for Zero-shot Semantic Segmentation with Pre-trained Vision-language Model [61.58071099082296]
It is unclear how to make zero-shot recognition work well on broader vision problems, such as object detection and semantic segmentation.
In this paper, we target zero-shot semantic segmentation by building on an off-the-shelf pre-trained vision-language model, i.e., CLIP.
Our experimental results show that this simple framework surpasses the previous state of the art by a large margin.
arXiv Detail & Related papers (2021-12-29T18:56:18Z)
- Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning [62.13520959168732]
We propose a semi-supervised learning framework for instrument segmentation in 3D US.
To enable SSL, a Dual-UNet is proposed to segment the instrument.
Our proposed method achieves a Dice score of about 68.6%-69.1% with an inference time of about 1 second per volume.
arXiv Detail & Related papers (2021-07-30T07:59:45Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and applicable for biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Contrastive learning of global and local features for medical image segmentation with limited annotations [10.238403787504756]
A key requirement for the success of supervised deep learning is a large labeled dataset.
We propose strategies for extending the contrastive learning framework for segmentation of medical images in the semi-supervised setting.
In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques.
arXiv Detail & Related papers (2020-06-18T13:31:26Z)
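Contrastive pretraining of the kind described in the entry above typically optimises an InfoNCE-style objective over paired embeddings; a minimal numpy sketch (our illustration with hypothetical names, not the paper's implementation):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Minimal InfoNCE contrastive loss sketch (hypothetical).
    anchors, positives: (N, D) L2-normalised embeddings; row i of
    positives is the positive for row i of anchors, all other rows
    act as negatives."""
    logits = anchors @ positives.T / temperature           # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))             # cross-entropy on matches
```

Matching anchor-positive pairs drive the loss toward zero while mismatched pairs raise it, which is what rewards embeddings that stay consistent across global and local views of the same image.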
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.