Neural Style Transfer and Unpaired Image-to-Image Translation to deal
with the Domain Shift Problem on Spheroid Segmentation
- URL: http://arxiv.org/abs/2112.09043v1
- Date: Thu, 16 Dec 2021 17:34:45 GMT
- Title: Neural Style Transfer and Unpaired Image-to-Image Translation to deal
with the Domain Shift Problem on Spheroid Segmentation
- Authors: Manuel García-Domínguez and César Domínguez and Jónathan Heras and
Eloy Mata and Vico Pascual
- Abstract summary: Domain shift is a generalisation problem of machine learning models that occurs when the data distribution of the training set differs from the data distribution encountered by the model once it is deployed.
This is common in the context of biomedical image segmentation due to the variance of experimental conditions, equipment, and capturing settings.
We have illustrated the domain shift problem in the context of spheroid segmentation with 4 deep learning segmentation models that achieved an IoU over 97% when tested on images following the training distribution, but whose performance decreased by up to 84% when applied to images captured under different conditions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background and objectives. Domain shift is a generalisation problem of
machine learning models that occurs when the data distribution of the training
set differs from the data distribution encountered by the model when it is
deployed. This is common in the context of biomedical image segmentation due to
the variance of experimental conditions, equipment, and capturing settings. In
this work, we address this challenge by studying both neural style transfer
algorithms and unpaired image-to-image translation methods in the context of
the segmentation of tumour spheroids.
Methods. We have illustrated the domain shift problem in the context of
spheroid segmentation with 4 deep learning segmentation models that achieved an
IoU over 97% when tested on images following the training distribution, but
whose performance decreased by up to 84% when applied to images captured under
different conditions. In order to deal with this problem, we have explored 3
style transfer algorithms (NST, deep image analogy, and STROTSS) and 6
unpaired image-to-image translation algorithms (CycleGAN, DualGAN, ForkGAN,
GANILLA, CUT, and FastCUT). These algorithms have been integrated into a
high-level API that facilitates their application to other contexts where the
domain shift problem occurs.
Results. We have considerably improved the performance of the 4 segmentation
models on images captured under different conditions by using both style
transfer and image-to-image translation algorithms. In particular, 2 style
transfer algorithms (NST and deep image analogy) and 1 unpaired image-to-image
translation algorithm (CycleGAN) improve the IoU of the models in a range from
0.24 to 76.07, thereby reaching a performance similar to the one obtained when
the models are applied to images following the training distribution.
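The IoU (intersection over union) metric reported throughout the abstract is straightforward to compute for binary segmentation masks. The following is a minimal sketch using NumPy; the function and array names are illustrative and are not taken from the paper's API:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty masks agree perfectly by convention.
    return float(intersection) / float(union) if union > 0 else 1.0

# Example: two 4x4 square masks offset by one pixel on an 8x8 grid.
a = np.zeros((8, 8), dtype=bool)
a[2:6, 2:6] = True   # 16 foreground pixels
b = np.zeros((8, 8), dtype=bool)
b[3:7, 3:7] = True   # 16 foreground pixels
print(iou(a, b))     # intersection 9, union 23 -> ~0.391
```

A drop in this score on out-of-distribution images is exactly the domain shift effect the paper quantifies: the same model, scored with the same metric, degrades when the test masks come from images captured under different conditions.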
Related papers
- Anatomical Conditioning for Contrastive Unpaired Image-to-Image Translation of Optical Coherence Tomography Images [0.0]
We study the problem employing an optical coherence tomography (OCT) data set of Spectralis-OCT and Home-OCT images.
I2I translation is challenging because the images are unpaired.
Our approach increases the similarity between the style-translated images and the target distribution.
arXiv Detail & Related papers (2024-04-08T11:20:28Z)
- ACE: Zero-Shot Image to Image Translation via Pretrained Auto-Contrastive-Encoder [2.1874189959020427]
We propose a new approach to extract image features by learning the similarities and differences of samples within the same data distribution.
The design of ACE enables us to achieve zero-shot image-to-image translation with no training on image translation tasks for the first time.
Our model achieves competitive results on multimodal image translation tasks with zero-shot learning as well.
arXiv Detail & Related papers (2023-02-22T23:52:23Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic Segmentation [97.74059510314554]
Unsupervised domain adaptation (UDA) for semantic segmentation aims to adapt a segmentation model trained on the labeled source domain to the unlabeled target domain.
Existing methods try to learn domain invariant features while suffering from large domain gaps.
We propose a novel Dual Soft-Paste (DSP) method in this paper.
arXiv Detail & Related papers (2021-07-20T16:22:40Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- GAIT: Gradient Adjusted Unsupervised Image-to-Image Translation [5.076419064097734]
An adversarial loss is utilized to match the distributions of the translated and target image sets.
This may create artifacts if two domains have different marginal distributions, for example, in uniform areas.
We propose an unsupervised IIT that preserves the uniform regions after the translation.
arXiv Detail & Related papers (2020-09-02T08:04:00Z)
- Radon cumulative distribution transform subspace modeling for image classification [18.709734704950804]
We present a new supervised image classification method applicable to a broad class of image deformation models.
The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data.
In addition to the test accuracy performances, we show improvements in terms of computational efficiency.
arXiv Detail & Related papers (2020-04-07T19:47:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.