Ultra-high-resolution unpaired stain transformation via Kernelized
Instance Normalization
- URL: http://arxiv.org/abs/2208.10730v1
- Date: Tue, 23 Aug 2022 04:47:43 GMT
- Title: Ultra-high-resolution unpaired stain transformation via Kernelized
Instance Normalization
- Authors: Ming-Yang Ho, Min-Sheng Wu, and Che-Ming Wu
- Abstract summary: We propose a strategy for ultra-high-resolution unpaired image-to-image translation: Kernelized Instance Normalization (KIN).
KIN preserves local information and achieves seamless stain transformation with constant GPU memory usage.
This is the first successful study of ultra-high-resolution unpaired image-to-image translation with constant space complexity.
- Score: 1.2234742322758418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While hematoxylin and eosin (H&E) is a standard staining
procedure, immunohistochemistry (IHC) staining further serves as a
diagnostic and prognostic method. However, acquiring special staining
results incurs substantial costs.
Hence, we propose a strategy for ultra-high-resolution unpaired
image-to-image translation: Kernelized Instance Normalization (KIN), which
preserves local information and achieves seamless stain transformation with
constant GPU memory usage. Given a patch, its corresponding position, and a
kernel, KIN computes local statistics using a convolution operation. In
addition, KIN can be easily plugged into most currently developed
frameworks without re-training.
We demonstrate that KIN achieves state-of-the-art stain transformation by
replacing instance normalization (IN) layers with KIN layers in three popular
frameworks and testing on two histopathological datasets. Furthermore, we
manifest the generalizability of KIN with high-resolution natural images.
Finally, human evaluation and several objective metrics are used to compare the
performance of different approaches.
Overall, this is the first successful study of ultra-high-resolution
unpaired image-to-image translation with constant space complexity. Code is
available at: https://github.com/Kaminyou/URUST
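The mechanism the abstract describes, computing local rather than global statistics for each patch position, can be sketched in a few lines. The following is an illustrative NumPy approximation only, not the authors' implementation (see the linked repository for that): the box-shaped averaging kernel, reflect padding, and function name are assumptions chosen for clarity. Where standard instance normalization uses one mean and variance per channel, this version computes them at every spatial position over a sliding kernel window, which is equivalent to convolving the feature map with a box kernel.

```python
import numpy as np

def kernelized_instance_norm(x, kernel_size=3, eps=1e-5):
    """Normalize each channel of a feature map using LOCAL statistics.

    Instance normalization uses a single mean/variance per channel.
    Here, mean and variance are instead computed around every spatial
    position over a kernel_size x kernel_size window -- the same result
    as convolving the map with a box (averaging) kernel.

    x: array of shape (C, H, W). Returns an array of the same shape.
    """
    c, h, w = x.shape
    pad = kernel_size // 2
    out = np.empty_like(x, dtype=np.float64)
    for ci in range(c):
        # Reflect-pad so border positions also see a full kernel window.
        p = np.pad(x[ci], pad, mode="reflect")
        # All kernel_size x kernel_size windows, one per spatial position.
        windows = np.lib.stride_tricks.sliding_window_view(
            p, (kernel_size, kernel_size)
        )
        mean = windows.mean(axis=(-1, -2))   # local mean, shape (H, W)
        var = windows.var(axis=(-1, -2))     # local variance, shape (H, W)
        out[ci] = (x[ci] - mean) / np.sqrt(var + eps)
    return out
```

In the paper's setting, such local statistics let adjacent patches of an ultra-high-resolution slide be normalized consistently at their shared borders, which is what suppresses tiling artifacts while memory usage stays constant per patch.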
Related papers
- Every Pixel Has its Moments: Ultra-High-Resolution Unpaired Image-to-Image Translation via Dense Normalization [4.349838917565205]
We introduce a Dense Normalization layer designed to estimate pixel-level statistical moments.
This approach effectively diminishes tiling artifacts while concurrently preserving local color and hue contrasts.
Our work paves the way for future exploration in handling images of arbitrary resolutions within the realm of unpaired image-to-image translation.
arXiv Detail & Related papers (2024-07-05T04:14:50Z)
- Image-level Regression for Uncertainty-aware Retinal Image Segmentation [3.7141182051230914]
We introduce a novel Uncertainty-Aware (SAUNA) transform, which adds pixel uncertainty to the ground truth.
Our results indicate that the integration of the SAUNA transform and these segmentation losses led to significant performance boosts for different segmentation models.
arXiv Detail & Related papers (2024-05-27T04:17:10Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks on hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- AdaWCT: Adaptive Whitening and Coloring Style Injection [55.554986498301574]
We present AdaWCT, a generalization of AdaIN that relies on the whitening and coloring transformation (WCT), which we apply for style injection in large GANs.
We show, through experiments on the StarGANv2 architecture, that this generalization, albeit conceptually simple, results in significant improvements in the quality of the generated images.
arXiv Detail & Related papers (2022-08-01T15:07:51Z)
- RandStainNA: Learning Stain-Agnostic Features from Histology Slides by Bridging Stain Augmentation and Normalization [45.81689497433507]
Two proposals, namely stain normalization (SN) and stain augmentation (SA), have been spotlighted to reduce the generalization error.
To address the problems, we unify SN and SA with a novel RandStainNA scheme.
The RandStainNA constrains variable stain styles in a practicable range to train a stain agnostic deep learning model.
arXiv Detail & Related papers (2022-06-25T16:43:59Z)
- StainNet: a fast and robust stain normalization network [0.7796684624647288]
This paper proposes StainNet, a fast and robust stain normalization network with only 1.28K parameters.
The proposed method performs well in stain normalization and achieves better accuracy and image quality.
arXiv Detail & Related papers (2020-12-23T08:16:27Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.