Artifact Restoration in Histology Images with Diffusion Probabilistic
Models
- URL: http://arxiv.org/abs/2307.14262v1
- Date: Wed, 26 Jul 2023 15:50:02 GMT
- Title: Artifact Restoration in Histology Images with Diffusion Probabilistic
Models
- Authors: Zhenqi He, Junjun He, Jin Ye, Yiqing Shen
- Abstract summary: Histological whole slide images (WSIs) are often compromised by artifacts, such as tissue folding and bubbles.
Existing approaches to restoring artifact images are confined to Generative Adversarial Networks (GANs).
We make the first attempt at a denoising diffusion probabilistic model for histological artifact restoration, namely ArtiFusion.
- Score: 10.016731839549259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Histological whole slide images (WSIs) are often compromised by
artifacts, such as tissue folding and bubbles, which increase the difficulty of
examination for both pathologists and Computer-Aided Diagnosis (CAD) systems.
Existing approaches to restoring artifact images are confined to Generative
Adversarial Networks (GANs), where the restoration process is formulated as an
image-to-image transfer. These methods are prone to mode collapse and
unintended transfer of the stain style, leading to unsatisfactory and
unrealistic restored images. We make the first attempt at a denoising diffusion
probabilistic model for histological artifact restoration, namely ArtiFusion.
Specifically, ArtiFusion formulates artifact-region restoration as a gradual
denoising process, and its training relies solely on artifact-free images to
reduce training complexity. Furthermore, to capture local-global correlations
in regional artifact restoration, a novel Swin-Transformer denoising
architecture is designed, along with a time token scheme. Our extensive
evaluations demonstrate the effectiveness of ArtiFusion as a pre-processing
method for histology analysis, which successfully preserves tissue structures
and stain style in artifact-free regions during restoration. Code is available
at
https://github.com/zhenqi-he/ArtiFusion.
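For intuition, here is a minimal, hypothetical sketch of region-wise restoration with a denoising diffusion model in the spirit of the abstract: the artifact region is denoised from pure noise while artifact-free pixels are re-imposed from the known image at every reverse step. The `denoiser` placeholder, mask handling, and noise schedule are assumptions for illustration; they stand in for the paper's Swin-Transformer denoiser and time token scheme rather than reproducing them.

```python
# Minimal sketch (not the authors' code) of region-wise restoration with a DDPM:
# the artifact region is denoised from pure noise while artifact-free pixels are
# re-imposed from the known image at every reverse step. `denoiser` stands in
# for the paper's Swin-Transformer network and is an assumption here.
import torch

@torch.no_grad()
def restore_artifact_region(denoiser, image, artifact_mask, betas):
    """image: (B,C,H,W) histology patch; artifact_mask: (B,1,H,W), 1 = artifact."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(image)                       # start the region from noise
    for t in reversed(range(len(betas))):
        t_batch = torch.full((image.shape[0],), t, device=image.device, dtype=torch.long)
        # keep artifact-free pixels consistent with the (noised) clean image
        known = torch.sqrt(alpha_bar[t]) * image + torch.sqrt(1 - alpha_bar[t]) * torch.randn_like(image)
        x = artifact_mask * x + (1 - artifact_mask) * known
        # one reverse diffusion step over the whole patch
        eps = denoiser(x, t_batch)                    # predicted noise
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    # only the artifact region is replaced; the rest keeps its original content
    return artifact_mask * x + (1 - artifact_mask) * image
```

Because only the masked region is resampled, stain style and tissue structure outside the artifact are preserved by construction, which matches the behaviour the abstract emphasizes.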
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then involved in the second stage to tune the diffusion model through assigning a per-pixel confidence map for each image.
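One plausible way to use such a per-pixel confidence map when tuning a diffusion model, shown purely as an illustrative assumption (the function and weighting rule below are not taken from the paper), is to weight the denoising loss pixel by pixel:

```python
# Hypothetical sketch of weighting a diffusion training loss with a per-pixel
# confidence map from an artifact detector (not the authors' exact procedure).
import torch
import torch.nn.functional as F

def confidence_weighted_diffusion_loss(pred_noise, true_noise, confidence_map):
    """confidence_map: (B,1,H,W) in [0,1]; higher = more likely artifact-free.
    Pixels flagged as artifacts contribute less, steering fine-tuning away
    from reinforcing flawed regions."""
    per_pixel = F.mse_loss(pred_noise, true_noise, reduction="none")  # (B,C,H,W)
    return (per_pixel * confidence_map).mean()
```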
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- One-step Generative Diffusion for Realistic Extreme Image Rescaling [47.89362819768323]
We propose a novel framework called One-Step Image Rescaling Diffusion (OSIRDiff) for extreme image rescaling.
OSIRDiff performs rescaling operations in the latent space of a pre-trained autoencoder.
It effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model.
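A rough, hypothetical sketch of what latent-space rescaling with a one-step diffusion prior can look like (the `vae` and `diffusion_refiner` interfaces and the scale handling are assumptions, not taken from the paper):

```python
# Hypothetical sketch of latent-space rescaling: encode with a pre-trained
# autoencoder, rescale the latent, refine it with a one-step diffusion prior,
# then decode. `vae` and `diffusion_refiner` are assumed interfaces.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rescale_in_latent_space(vae, diffusion_refiner, image, scale=0.25):
    latent = vae.encode(image)                                   # image -> latent
    small = F.interpolate(latent, scale_factor=scale, mode="bilinear", align_corners=False)
    # ... the downscaled latent could be stored or transmitted here ...
    upscaled = F.interpolate(small, size=latent.shape[-2:], mode="bilinear", align_corners=False)
    refined = diffusion_refiner(upscaled)                        # single denoising step as prior
    return vae.decode(refined)
```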
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- LatentArtiFusion: An Effective and Efficient Histological Artifacts Restoration Framework [22.525991027687084]
Current approaches for histological artifact restoration, based on Generative Adversarial Networks (GANs) and pixel-level Diffusion Models, suffer from performance limitations and computational inefficiencies.
We propose a novel framework, LatentArtiFusion, which leverages the latent diffusion model (LDM) to reconstruct histological artifacts with high performance and computational efficiency.
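As a hedged illustration of why latent-space restoration is cheaper, the sketch below encodes the patch, applies the region-restoration routine from the ArtiFusion sketch above to the downsampled latent, and decodes; the `vae` interface and mask downsampling are assumptions, not the authors' implementation:

```python
# Hypothetical sketch of moving restoration into latent space for efficiency:
# encode the stained patch, restore the masked latent region with a diffusion
# model (reusing the restore_artifact_region sketch above), then decode.
import torch
import torch.nn.functional as F

@torch.no_grad()
def latent_artifact_restoration(vae, latent_denoiser, image, artifact_mask, betas):
    latent = vae.encode(image)                                    # much smaller than pixel space
    mask = F.interpolate(artifact_mask, size=latent.shape[-2:], mode="nearest")
    restored_latent = restore_artifact_region(latent_denoiser, latent, mask, betas)
    return vae.decode(restored_latent)
```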
arXiv Detail & Related papers (2024-07-29T17:00:32Z)
- SSP-IR: Semantic and Structure Priors for Diffusion-based Realistic Image Restoration [20.873676111265656]
SSP-IR aims to fully exploit semantic and structure priors from low-quality images.
Our method outperforms other state-of-the-art methods overall on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-07-04T04:55:14Z)
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as a channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
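The summary does not spell out how anomalies are scored; a common recipe for reconstruction-based detectors, shown here only as an assumption, is to compare multi-scale features of the input and its reconstruction from the pre-trained extractor:

```python
# Hypothetical sketch of feature-space anomaly scoring for a reconstruction-based
# detector. The scoring rule is an assumption; the summary only names the three
# components (autoencoder, semantic-guided denoiser, feature extractor).
import torch
import torch.nn.functional as F

@torch.no_grad()
def anomaly_map(feature_extractor, image, reconstruction):
    """Returns a (B,1,H,W) map; larger values = more anomalous."""
    feats_in = feature_extractor(image)            # assumed to return multi-scale feature maps
    feats_rec = feature_extractor(reconstruction)
    maps = []
    for f_in, f_rec in zip(feats_in, feats_rec):
        d = 1 - F.cosine_similarity(f_in, f_rec, dim=1, eps=1e-6)   # (B,h,w)
        maps.append(F.interpolate(d.unsqueeze(1), size=image.shape[-2:],
                                  mode="bilinear", align_corners=False))
    return torch.stack(maps, dim=0).mean(dim=0)
```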
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction [34.91517935951518]
Sparse-view computed tomography (CT) is a promising solution for expediting the scanning process and mitigating radiation exposure to patients.
Recently, deep learning-based image post-processing methods have shown promising results.
We propose a simple yet effective FREquency-band-awarE and SElf-guidED network, termed FreeSeed, which can effectively remove artifacts and recover missing details.
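As a hedged illustration of the frequency-band view (the band edges and hard ring masks below are assumptions, not the paper's design), a CT image can be split into low-, mid-, and high-frequency components that such a network could treat separately:

```python
# Hypothetical illustration of a frequency-band decomposition, the kind of
# signal a "frequency-band-aware" network can operate on.
import torch

def split_frequency_bands(image, cutoffs=(0.1, 0.3)):
    """image: (B,C,H,W). Returns band-limited images (low, mid, high)."""
    H, W = image.shape[-2:]
    fy = torch.fft.fftfreq(H, device=image.device).view(H, 1)
    fx = torch.fft.fftfreq(W, device=image.device).view(1, W)
    radius = torch.sqrt(fy ** 2 + fx ** 2)                 # normalised radial frequency
    spectrum = torch.fft.fft2(image)
    bands, prev = [], 0.0
    for cut in (*cutoffs, float("inf")):
        mask = ((radius >= prev) & (radius < cut)).to(spectrum.dtype)
        bands.append(torch.fft.ifft2(spectrum * mask).real)
        prev = cut
    return bands
```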
arXiv Detail & Related papers (2023-07-12T03:39:54Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Deep Sinogram Completion with Image Prior for Metal Artifact Reduction in CT Images [29.019325663195627]
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance.
CT images may be affected adversely in the presence of metallic objects, which could lead to severe metal artifacts.
We propose a generalizable framework for metal artifact reduction (MAR) by simultaneously leveraging the advantages of image domain and sinogram domain-based MAR techniques.
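A toy, purely classical illustration of combining the two domains (an assumption for intuition, not the authors' learned networks): forward-project an image-domain prior to fill the metal trace in the sinogram, then reconstruct with filtered back-projection.

```python
# Hypothetical sketch: fill the metal-corrupted sinogram trace with values
# forward-projected from an image-domain prior, then reconstruct with FBP.
# This classical pipeline stands in for the learned completion described above.
import numpy as np
from skimage.transform import radon, iradon

def reduce_metal_artifacts(ct_image, metal_mask, prior_image, theta=None):
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(ct_image.shape), endpoint=False)
    sino = radon(ct_image, theta=theta)                         # corrupted sinogram
    trace = radon(metal_mask.astype(float), theta=theta) > 0    # metal trace in sinogram
    sino_prior = radon(prior_image, theta=theta)                # prior forward projection
    sino[trace] = sino_prior[trace]                             # complete corrupted bins
    return iradon(sino, theta=theta, filter_name="ramp")
```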
arXiv Detail & Related papers (2020-09-16T04:43:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.