Blind stain separation using model-aware generative learning and its
applications on fluorescence microscopy images
- URL: http://arxiv.org/abs/2102.06802v1
- Date: Fri, 12 Feb 2021 22:39:39 GMT
- Title: Blind stain separation using model-aware generative learning and its
applications on fluorescence microscopy images
- Authors: Xingyu Li
- Abstract summary: Prior model-based stain separation methods rely on stains' spatial distributions over an image.
Deep generative models are used for this purpose.
In this study, a novel learning-based blind source separation framework is proposed.
- Score: 1.713291434132985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple stains are usually used to highlight biological substances in
biomedical image analysis. To decompose multiple stains for co-localization
quantification, blind source separation is usually performed. Prior model-based
stain separation methods typically rely on stains' spatial distributions over an
image and may fail to solve the co-localization problem. With the advantage of
machine learning, deep generative models are used for this purpose. Since prior
knowledge of imaging models is ignored in purely data-driven solutions, these
methods may be sub-optimal. In this study, a novel learning-based blind source
separation framework is proposed, where the physical model of biomedical
imaging is incorporated to regularize the learning process. The introduced
model-relevant adversarial loss couples all generators in the framework and
limits the capacities of the generative models. Furthermore, a training
algorithm is designed for the proposed framework to avoid inter-generator
confusion during learning. This paper particularly takes fluorescence unmixing
in fluorescence microscopy images as an application example of the proposed
framework. Qualitative and quantitative experimentation on a public
fluorescence microscopy image set demonstrates the superiority of the proposed
method over both prior model-based approaches and learning-based methods.
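The paper's framework itself is GAN-based and is not reproduced here, but the linear mixing model it regularizes can be illustrated with a minimal, hypothetical sketch: each pixel's observed intensity is modeled as a non-negative combination of per-stain abundance maps and spectral signatures, and blind separation recovers both factors from the mixture alone. The function name, matrix shapes, and iteration counts below are illustrative assumptions, using plain multiplicative-update NMF as a stand-in for a classical model-based baseline:

```python
import numpy as np

def blind_unmix(I, n_stains=2, n_iter=500, eps=1e-9, seed=0):
    """Blindly factor a non-negative image matrix I (n_pixels, n_channels)
    into abundance maps S (n_pixels, n_stains) and spectral signatures
    C (n_stains, n_channels) so that I ~= S @ C."""
    rng = np.random.default_rng(seed)
    n_pixels, n_channels = I.shape
    S = rng.random((n_pixels, n_stains)) + eps   # per-stain abundance maps
    C = rng.random((n_stains, n_channels)) + eps # per-stain spectral signatures
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates for the Frobenius-norm objective;
        # they preserve non-negativity of both factors by construction.
        C *= (S.T @ I) / (S.T @ S @ C + eps)
        S *= (I @ C.T) / (S @ C @ C.T + eps)
    return S, C

# Usage: synthesize a 2-stain, 3-channel mixture and check reconstruction.
rng = np.random.default_rng(1)
S_true = rng.random((64 * 64, 2))                       # ground-truth abundances
C_true = np.array([[0.9, 0.1, 0.0], [0.0, 0.2, 0.8]])   # ground-truth spectra
I = S_true @ C_true
S_hat, C_hat = blind_unmix(I)
err = np.linalg.norm(I - S_hat @ C_hat) / np.linalg.norm(I)
```

As the abstract notes, such purely spatial factorizations can struggle when stains co-localize, which is the gap the learning-based framework targets.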
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then involved in the second stage to tune the diffusion model through assigning a per-pixel confidence map for each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- Mask-guided cross-image attention for zero-shot in-silico histopathologic image generation with a diffusion model [0.10910416614141322]
Diffusion models are the state-of-the-art solution for generating in-silico images.
Appearance transfer diffusion models are designed for natural images.
In computational pathology, specifically in oncology, it is not straightforward to define which objects in an image should be classified as foreground and background.
We contribute to the applicability of appearance transfer models to diffusion-stained images by modifying the appearance transfer guidance to alternate between class-specific AdaIN feature statistics matching.
arXiv Detail & Related papers (2024-07-16T12:36:26Z)
- Single Exposure Quantitative Phase Imaging with a Conventional Microscope using Diffusion Models [2.0760654993698426]
Transport-of-Intensity Equation (TIE) often requires multiple acquisitions at different defocus distances.
We propose to use chromatic aberrations to induce the required through-focus images with a single exposure.
Our contributions offer an alternative TIE approach that leverages chromatic aberrations, achieving accurate single-exposure phase measurement with white light.
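For context on why multiple acquisitions are normally needed: the Transport-of-Intensity Equation relates the transverse phase to the axial intensity derivative. The standard paraxial form (general background, not taken from this summary) is

```latex
\nabla_{\perp} \cdot \left( I(x, y)\, \nabla_{\perp} \phi(x, y) \right)
  = -k\, \frac{\partial I(x, y)}{\partial z}
```

where $I$ is intensity, $\phi$ is phase, and $k$ is the wavenumber. The term $\partial I / \partial z$ is conventionally estimated by finite differences between images captured at different defocus distances, which is the multi-acquisition requirement the chromatic-aberration approach replaces.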
arXiv Detail & Related papers (2024-06-06T15:44:24Z)
- Multi-target stain normalization for histology slides [6.820595748010971]
We introduce a novel approach that leverages multiple reference images to enhance robustness against stain variation.
Our method is parameter-free and can be adopted in existing computational pathology pipelines with no significant changes.
arXiv Detail & Related papers (2024-06-04T07:57:34Z)
- Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality [8.968599131722023]
Diffusion models have been successfully applied for the visual synthesis of strikingly realistic appearing images.
This raises strong concerns about their potential for malicious purposes.
We propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) for the automatic detection of synthetic images.
arXiv Detail & Related papers (2023-07-05T15:03:10Z)
- Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [102.64648158034568]
Diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models which enables the use of new compositional operators.
We find these samplers lead to notable improvements in compositional generation across a wide set of problems.
arXiv Detail & Related papers (2023-02-22T18:48:46Z)
- Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes [0.5076419064097732]
We present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images.
We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations.
arXiv Detail & Related papers (2023-01-21T16:25:04Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.