Exploring Weakly Supervised Semantic Segmentation Ensembles for Medical
Imaging Systems
- URL: http://arxiv.org/abs/2303.07896v2
- Date: Thu, 16 Mar 2023 08:09:58 GMT
- Authors: Erik Ostrowski and Bharath Srinivas Prabakaran and Muhammad Shafique
- Abstract summary: We propose a framework for reliable classification and detection of medical conditions in images.
Our framework achieves that by first utilizing lower threshold CAMs to cover the target object with high certainty.
We have demonstrated an improved Dice score of up to 8% on the BRATS and 6% on the DECATHLON datasets.
- Score: 11.693197342734152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable classification and detection of certain medical conditions
in images with state-of-the-art semantic segmentation networks requires vast
amounts of pixel-wise annotation. However, the public availability of such
datasets is minimal. Therefore, semantic segmentation with image-level labels
presents a promising alternative to this problem. Nevertheless, very few works
have focused on evaluating this technique and its applicability to the medical
sector. Due to their complexity and the small number of training examples in
medical datasets, classifier-based weakly supervised networks like class
activation maps (CAMs) struggle to extract useful information from them.
However, most state-of-the-art approaches rely on them to achieve their
improvements. Therefore, we propose a framework that can still utilize the
low-quality CAM predictions of complicated datasets to improve the accuracy of
our results. Our framework achieves that by first utilizing lower threshold
CAMs to cover the target object with high certainty; second, by combining
multiple low-threshold CAMs that even out their errors while highlighting the
target object. We performed exhaustive experiments on the popular multi-modal
BRATS and prostate DECATHLON segmentation challenge datasets. Using the
proposed framework, we have demonstrated an improved Dice score of up to 8% on
the BRATS and 6% on the DECATHLON datasets compared to the previous state of the art.
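The two-step idea in the abstract can be illustrated with a minimal sketch: binarize each CAM at a deliberately low threshold so the target object is covered with high recall, then majority-vote several such masks so that their individual false positives cancel out. Function names, the normalization step, and the voting rule below are illustrative assumptions, not details taken from the paper; the Dice score is included only to show how the result would be evaluated.

```python
import numpy as np

def low_threshold_mask(cam, threshold=0.2):
    """Binarize a class activation map at a deliberately low threshold
    (assumption: CAMs are min-max normalized to [0, 1] first)."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam >= threshold

def ensemble_masks(cams, threshold=0.2, min_votes=2):
    """Majority-vote several low-threshold CAM masks into one prediction,
    so that errors of individual CAMs even out."""
    votes = np.sum([low_threshold_mask(c, threshold) for c in cams], axis=0)
    return votes >= min_votes

def dice_score(pred, target):
    """Dice overlap between a binary prediction and the ground truth."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + 1e-8)

# Toy example: three noisy CAMs over a 2x2 image; the true object
# occupies the left column.
cams = [np.array([[0.9, 0.1], [0.8, 0.3]]),
        np.array([[0.7, 0.4], [0.9, 0.1]]),
        np.array([[0.8, 0.1], [0.6, 0.1]])]
target = np.array([[1, 0], [1, 0]], dtype=bool)

pred = ensemble_masks(cams, threshold=0.2, min_votes=2)
print(pred)
print(dice_score(pred, target))
```

Each individual low-threshold mask over-segments in a different place, but requiring at least two votes recovers the left-column object in this toy case.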
Related papers
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Semi-Supervised Semantic Segmentation of Vessel Images using Leaking Perturbations
Leaking GAN is a GAN-based semi-supervised architecture for retina vessel semantic segmentation.
Our key idea is to pollute the discriminator by leaking information from the generator.
This leads to more moderate generations that benefit the training of the GAN.
arXiv Detail & Related papers (2021-10-22T18:25:08Z)
- Few-shot segmentation of medical images based on meta-learning with implicit gradients
We propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm in a few-shot setting for medical image segmentation.
Our approach can leverage the learned weights from a diverse set of training samples and can be deployed on a new unseen dataset.
arXiv Detail & Related papers (2021-06-06T19:52:06Z)
- Salient Objects in Clutter
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- MIDeepSeg: Minimally Interactive Segmentation of Unseen Objects from Medical Images Using Deep Learning
We propose a novel deep learning-based interactive segmentation method that has high efficiency due to only requiring clicks as user inputs.
Our proposed framework achieves accurate results with fewer user interactions and less time compared with state-of-the-art interactive frameworks.
arXiv Detail & Related papers (2021-04-25T14:15:17Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Towards Unbiased COVID-19 Lesion Localisation and Segmentation via Weakly Supervised Learning
We propose a data-driven framework supervised by only image-level labels to support unbiased lesion localisation.
The framework can explicitly separate potential lesions from original images, with the help of a generative adversarial network and a lesion-specific decoder.
arXiv Detail & Related papers (2021-03-01T06:05:49Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.