Image Embedded Segmentation: Uniting Supervised and Unsupervised
Objectives for Segmenting Histopathological Images
- URL: http://arxiv.org/abs/2001.11202v3
- Date: Wed, 25 Nov 2020 11:18:20 GMT
- Title: Image Embedded Segmentation: Uniting Supervised and Unsupervised
Objectives for Segmenting Histopathological Images
- Authors: C. T. Sari, C. Sokmensuer, and C. Gunduz-Demir
- Abstract summary: This paper presents a new regularization method to train a fully convolutional network for semantic tissue segmentation.
It relies on the benefit of unsupervised learning, in the form of image reconstruction, for network training.
Our experiments demonstrate that it leads to better segmentation results on these datasets, compared to its counterparts.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new regularization method to train a fully
convolutional network for semantic tissue segmentation in histopathological
images. This method relies on the benefit of unsupervised learning, in the form
of image reconstruction, for network training. To this end, it puts forward an
idea of defining a new embedding that allows uniting the main supervised task
of semantic segmentation and an auxiliary unsupervised task of image
reconstruction into a single one and proposes to learn this united task by a
single generative model. This embedding generates an output image by
superimposing an input image on its segmentation map. Then, the method learns
to translate the input image to this embedded output image using a conditional
generative adversarial network, which is known to be quite effective for
image-to-image translation. This proposal differs from the existing
approach that uses image reconstruction for the same regularization purpose.
The existing approach considers segmentation and image reconstruction as two
separate tasks in a multi-task network, defines their losses independently, and
combines them in a joint loss function. However, the definition of such a
function requires externally determining the right contributions of the supervised
and unsupervised losses that yield balanced learning between the segmentation
and image reconstruction tasks. The proposed approach provides an easier
solution to this problem by uniting these two tasks into a single one, which
intrinsically combines their losses. We test our approach on three datasets of
histopathological images. Our experiments demonstrate that it leads to better
segmentation results on these datasets, compared to its counterparts.
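The abstract states that the embedded output image is produced by superimposing the input image on its segmentation map, and that a conditional GAN then learns to translate the input into this embedded image. The exact superimposition rule is not given above, so the following is only a minimal sketch, assuming a simple alpha blend of the input with a class-colored map; the function name `embed_segmentation`, the `class_colors` table, and the `alpha` value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption): build an embedded target by alpha-blending the
# input image with a color-coded version of its segmentation map.
import numpy as np

def embed_segmentation(image, seg_map, class_colors, alpha=0.5):
    """Superimpose an input image on its segmentation map.

    image:        (H, W, 3) float array in [0, 1]
    seg_map:      (H, W) integer array of class labels
    class_colors: (num_classes, 3) float array in [0, 1]
    alpha:        blending weight for the class colors (illustrative choice)
    """
    color_map = class_colors[seg_map]          # (H, W, 3) per-pixel class color
    embedded = (1.0 - alpha) * image + alpha * color_map
    return np.clip(embedded, 0.0, 1.0)

# Toy usage with hypothetical values:
# image = np.random.rand(256, 256, 3)
# seg_map = np.random.randint(0, 3, size=(256, 256))
# class_colors = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
# target = embed_segmentation(image, seg_map, class_colors)
#
# A pix2pix-style conditional GAN would then be trained to map `image` to
# `target` as a single image-to-image translation task, so the segmentation
# and reconstruction objectives are combined intrinsically instead of through
# an externally weighted joint loss such as L = L_seg + lambda * L_rec.
```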
Related papers
- FuseNet: Self-Supervised Dual-Path Network for Medical Image
Segmentation [3.485615723221064]
FuseNet is a dual-stream framework for self-supervised semantic segmentation.
The cross-modal fusion technique extends the principles of CLIP by replacing textual data with augmented images.
Experiments on skin lesion and lung segmentation datasets demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-11-22T00:03:16Z) - A Vessel-Segmentation-Based CycleGAN for Unpaired Multi-modal Retinal
Image Synthesis [11.225641274591101]
Unpaired image-to-image translation of retinal images can efficiently increase the training dataset for deep-learning-based retinal registration methods.
Our method integrates a vessel segmentation network into the image-to-image translation task by extending the CycleGAN framework.
arXiv Detail & Related papers (2023-06-05T14:06:43Z) - Revisiting Image Reconstruction for Semi-supervised Semantic
Segmentation [16.27277238968567]
We revisit the idea of using image reconstruction as an auxiliary task and incorporate it with a modern semi-supervised semantic segmentation framework.
Surprisingly, we discover that such an old idea in semi-supervised learning can produce results competitive with state-of-the-art semantic segmentation algorithms.
arXiv Detail & Related papers (2023-03-17T06:31:06Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Joint reconstruction-segmentation on graphs [0.7829352305480285]
We present a method for joint reconstruction-segmentation using graph-based segmentation methods.
Complications arise due to the large size of the matrices involved, and we show how these complications can be managed.
We apply this scheme to distorted versions of "two cows" images familiar from previous graph-based segmentation literature.
arXiv Detail & Related papers (2022-08-11T14:01:38Z) - Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z) - Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We exploit a self-supervised loss function to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks, and it outperforms existing self-supervised methods.
arXiv Detail & Related papers (2020-11-11T08:34:40Z) - Unsupervised Learning of Image Segmentation Based on Differentiable
Feature Clustering [14.074732867392008]
The usage of convolutional neural networks (CNNs) for unsupervised image segmentation was investigated in this study.
We present a novel end-to-end network for unsupervised image segmentation that consists of normalization and an argmax function for differentiable clustering.
We also present an extension of the proposed method for segmentation with scribbles as user input, which showed better accuracy than existing methods.
arXiv Detail & Related papers (2020-07-20T10:28:36Z) - Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z) - Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
The current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z) - Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning [86.45526827323954]
Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training.
We propose an iterative algorithm to learn such pairwise relations.
We show that the proposed algorithm performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.