Between Generating Noise and Generating Images: Noise in the Correct
Frequency Improves the Quality of Synthetic Histopathology Images for Digital
Pathology
- URL: http://arxiv.org/abs/2302.06549v1
- Date: Mon, 13 Feb 2023 17:49:24 GMT
- Title: Between Generating Noise and Generating Images: Noise in the Correct
Frequency Improves the Quality of Synthetic Histopathology Images for Digital
Pathology
- Authors: Nati Daniel, Eliel Aknin, Ariel Larey, Yoni Peretz, Guy Sela, Yael
Fisher, Yonatan Savir
- Abstract summary: Synthetic images can augment existing datasets, to improve and validate AI algorithms.
We show that introducing random single-pixel noise with the appropriate spatial frequency into a semantic mask can dramatically improve the quality of the synthetic images.
Our work suggests a simple and powerful approach for generating synthetic data on demand to unbias limited datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence and machine learning techniques promise to
revolutionize the field of digital pathology. However, these models demand
considerable amounts of data, while the availability of unbiased training data
is limited. Synthetic images can augment existing datasets to improve and
validate AI algorithms. Yet, controlling the exact distribution of cellular
features within them is still challenging. One of the solutions is harnessing
conditional generative adversarial networks that take a semantic mask as an
input rather than random noise. Unlike in other domains, outlining the exact
cellular structure of tissues is hard, and most of the input masks depict
regions of cell types. However, using polygon-based masks introduces inherent
artifacts within the synthetic images, owing to the mismatch between the polygon
size and the single-cell size. In this work, we show that introducing random
single-pixel noise with the appropriate spatial frequency into a polygon
semantic mask can dramatically improve the quality of the synthetic images. We
use our platform to generate synthetic images of immunohistochemistry-treated
lung biopsies. We test the quality of the images using a three-fold validation
procedure. First, we show that adding the appropriate noise frequency yields
87% of the similarity-metric improvement obtained by adding the actual
single-cell features. Second, we show that the synthetic images pass the Turing
test. Finally, we show that adding these synthetic images to the training set
improves AI performance on PD-L1 semantic segmentation.
Our work suggests a simple and powerful approach for generating synthetic data
on demand to unbias limited datasets, improving the algorithms' accuracy and
validating their robustness.
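The core manipulation described in the abstract, sprinkling random single-pixel noise into a polygon semantic mask before it is fed to the mask-conditioned GAN, can be sketched in a few lines of NumPy. The snippet below is a minimal illustrative sketch, not the authors' implementation: the function name add_single_pixel_noise, the noise_fraction parameter (used here as a stand-in for the paper's "appropriate spatial frequency"), and the class labels are all assumptions.

```python
import numpy as np

def add_single_pixel_noise(mask, noise_fraction=0.02, noise_label=0, rng=None):
    """Sprinkle random single-pixel noise into a polygon semantic mask.

    mask           : 2-D integer array of class labels (polygon regions).
    noise_fraction : fraction of pixels to flip; its value sets the density of
                     noise pixels and thus their characteristic spatial frequency
                     (an assumption about how the frequency would be controlled).
    noise_label    : label assigned to the flipped pixels (e.g. background).
    """
    rng = np.random.default_rng(rng)
    noisy = mask.copy()
    # Pick random pixel coordinates to perturb.
    n_pixels = int(noise_fraction * mask.size)
    rows = rng.integers(0, mask.shape[0], size=n_pixels)
    cols = rng.integers(0, mask.shape[1], size=n_pixels)
    noisy[rows, cols] = noise_label
    return noisy

# Example: a 512x512 mask containing one polygon region of class 1.
mask = np.zeros((512, 512), dtype=np.int32)
mask[100:400, 150:450] = 1
noisy_mask = add_single_pixel_noise(mask, noise_fraction=0.02, rng=0)
```

In a pipeline like the one described, the noisy mask would replace the clean polygon mask as the conditional GAN's input, and the noise density would be tuned until its characteristic spacing roughly matches the single-cell scale that the polygon regions cannot capture.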
Related papers
- Time Step Generating: A Universal Synthesized Deepfake Image Detector [0.4488895231267077]
We propose a universal synthetic image detector, Time Step Generating (TSG).
TSG does not rely on pre-trained models' reconstructing ability, specific datasets, or sampling algorithms.
We test the proposed TSG on the large-scale GenImage benchmark and it achieves significant improvements in both accuracy and generalizability.
arXiv Detail & Related papers (2024-11-17T09:39:50Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on the Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z)
- Mask-conditioned latent diffusion for generating gastrointestinal polyp images [2.027538200191349]
This study proposes a conditional DPM framework to generate synthetic GI polyp images conditioned on given segmentation masks.
Our system can generate an unlimited number of high-fidelity synthetic polyp images with the corresponding ground truth masks of polyps.
Results show that the best micro-imagewise IOU of 0.7751 was achieved by DeepLabv3+ when the training data consisted of both real and synthetic data.
arXiv Detail & Related papers (2023-04-11T14:11:17Z)
- DEPAS: De-novo Pathology Semantic Masks using a Generative Model [0.0]
We introduce a scalable generative model, coined as DEPAS, that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality.
We demonstrate the ability of DEPAS to generate realistic semantic maps of tissue for three types of organs: skin, prostate, and lung.
arXiv Detail & Related papers (2023-02-13T16:48:33Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- You Only Need Adversarial Supervision for Semantic Image Synthesis [84.83711654797342]
We propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results.
We show that images synthesized by our model are more diverse and follow the color and texture of real images more closely.
arXiv Detail & Related papers (2020-12-08T23:00:48Z)
- Image Translation for Medical Image Generation -- Ischemic Stroke Lesions [0.0]
Synthetic databases with annotated pathologies could provide the required amounts of training data.
We train different image-to-image translation models to synthesize magnetic resonance images of brain volumes with and without stroke lesions.
We show that for a small database of only 10 or 50 clinical cases, synthetic data augmentation yields significant improvement.
arXiv Detail & Related papers (2020-10-05T09:12:28Z)