Assessing the capacity of a denoising diffusion probabilistic model to
reproduce spatial context
- URL: http://arxiv.org/abs/2309.10817v1
- Date: Tue, 19 Sep 2023 17:58:35 GMT
- Title: Assessing the capacity of a denoising diffusion probabilistic model to
reproduce spatial context
- Authors: Rucha Deshpande, Muzaffer Özbey, Hua Li, Mark A. Anastasio, Frank J. Brooks
- Abstract summary: Diffusion probabilistic models (DDPMs) demonstrate superior image synthesis performance as compared to generative adversarial networks (GANs).
These claims have been evaluated using either ensemble-based methods designed for natural images, or conventional measures of image quality such as structural similarity.
The studies reveal new and important insights regarding the capacity of DDPMs to learn spatial context.
- Score: 7.289988602420457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have emerged as a popular family of deep generative models
(DGMs). In the literature, it has been claimed that one class of diffusion
models -- denoising diffusion probabilistic models (DDPMs) -- demonstrate
superior image synthesis performance as compared to generative adversarial
networks (GANs). To date, these claims have been evaluated using either
ensemble-based methods designed for natural images, or conventional measures of
image quality such as structural similarity. However, there remains an
important need to understand the extent to which DDPMs can reliably learn
medical imaging domain-relevant information, which is referred to as 'spatial
context' in this work. To address this, a systematic assessment of the ability
of DDPMs to learn spatial context relevant to medical imaging applications is
reported for the first time. A key aspect of the studies is the use of
stochastic context models (SCMs) to produce training data. In this way, the
ability of the DDPMs to reliably reproduce spatial context can be
quantitatively assessed by use of post-hoc image analyses. Error-rates in
DDPM-generated ensembles are reported, and compared to those corresponding to a
modern GAN. The studies reveal new and important insights regarding the
capacity of DDPMs to learn spatial context. Notably, the results demonstrate
that DDPMs hold significant capacity for generating contextually correct images
that are 'interpolated' between training samples, which may benefit
data-augmentation tasks in ways that GANs cannot.
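As a rough illustration of this evaluation protocol, the sketch below pairs a toy stochastic context model with a rule-based post-hoc check that scores a generated ensemble. The brightness-ordering rule, image size, and function names are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a toy stochastic context model (SCM) and a
# post-hoc check of the kind used to score generated ensembles.
# The rule (left patch always brighter than right patch) is an invented
# example, not one of the SCMs from the paper.
import numpy as np

RNG = np.random.default_rng(0)
SIZE = 64  # assumed image side length

def sample_scm_image():
    """Draw one training image whose spatial context follows a known rule."""
    img = RNG.normal(0.1, 0.02, size=(SIZE, SIZE))   # background
    left = RNG.uniform(0.6, 0.9)                      # brighter patch
    right = RNG.uniform(0.2, left - 0.1)              # rule: right < left
    img[16:32, 8:24] += left
    img[16:32, 40:56] += right
    return np.clip(img, 0.0, 1.0)

def violates_context(img):
    """Post-hoc analysis: does the image break the prescribed rule?"""
    return img[16:32, 8:24].mean() <= img[16:32, 40:56].mean()

def ensemble_error_rate(images):
    """Fraction of a generated ensemble that violates the context rule."""
    return float(np.mean([violates_context(im) for im in images]))

if __name__ == "__main__":
    training_set = [sample_scm_image() for _ in range(1000)]
    # `generated` would normally come from a trained DDPM or GAN sampler;
    # here we reuse SCM samples, so the error rate should be ~0.
    generated = [sample_scm_image() for _ in range(200)]
    print("ensemble error rate:", ensemble_error_rate(generated))
```

In the paper's setting, this kind of post-hoc analysis is applied to DDPM- and GAN-generated ensembles, and the reported error rate is the fraction of generated images whose spatial context violates the known SCM rules.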
Related papers
- Synthetic Augmentation for Anatomical Landmark Localization using DDPMs [0.22499166814992436]
Diffusion-based generative models have recently started to gain attention for their ability to generate high-quality synthetic images.
We propose a novel way to assess the quality of the generated images using a Markov Random Field (MRF) model for landmark matching and a Statistical Shape Model (SSM) to check landmark plausibility.
arXiv Detail & Related papers (2024-10-16T12:09:38Z)
- Towards a Theoretical Understanding of Memorization in Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI).
We provide a theoretical understanding of memorization in both conditional and unconditional DPMs under the assumption of model convergence.
We propose a novel data extraction method named Surrogate condItional Data Extraction (SIDE) that leverages a time-dependent classifier trained on the generated data as a surrogate condition to extract training data from unconditional DPMs (a minimal sketch of this guidance idea appears after this list).
arXiv Detail & Related papers (2024-10-03T13:17:06Z)
- Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation [56.87049651707208]
Few-shot Semantic Segmentation has evolved into in-context segmentation tasks and has become a crucial element in assessing generalist segmentation models.
Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework.
Based on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework.
arXiv Detail & Related papers (2024-10-03T10:33:49Z)
- Cross-conditioned Diffusion Model for Medical Image to Image Translation [22.020931436223204]
We introduce a Cross-conditioned Diffusion Model (CDM) for medical image-to-image translation.
First, we propose a Modality-specific Representation Model (MRM) to model the distribution of target modalities.
Then, we design a Modality-decoupled Diffusion Network (MDN) to efficiently and effectively learn the distribution from MRM.
arXiv Detail & Related papers (2024-09-13T02:48:56Z)
- SAR Image Synthesis with Diffusion Models [0.0]
Diffusion models (DMs) have become a popular method for generating synthetic data.
In this work, a specific type of DM, namely the denoising diffusion probabilistic model (DDPM), is adapted to the SAR domain.
We show that DDPM qualitatively and quantitatively outperforms state-of-the-art GAN-based methods for SAR image generation.
arXiv Detail & Related papers (2024-05-13T14:21:18Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Diffusion Model as Representation Learner [86.09969334071478]
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive results on various generative tasks.
We propose a novel knowledge transfer method that leverages the knowledge acquired by DPMs for recognition tasks.
arXiv Detail & Related papers (2023-08-21T00:38:39Z)
- Zero-shot Medical Image Translation via Frequency-Guided Diffusion Models [9.15810015583615]
We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model for structure-preserving image translation.
Based on its design, FGDM allows zero-shot learning, as it can be trained solely on the data from the target domain, and used directly for source-to-target domain translation.
FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) in metrics of Frechet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM).
arXiv Detail & Related papers (2023-04-05T20:47:40Z)
- MAUVE Scores for Generative Models: Theory and Practice [95.86006777961182]
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
We find that MAUVE can quantify the gaps between the distributions of human-written text and those of modern neural language models.
We demonstrate in the vision domain that MAUVE can identify known properties of generated images on par with or better than existing metrics.
arXiv Detail & Related papers (2022-12-30T07:37:40Z)
- Progressively-Growing AmbientGANs For Learning Stochastic Object Models From Imaging Measurements [14.501812971529137]
Objective optimization of medical imaging systems requires full characterization of all sources of randomness in the measured data.
We propose establishing a stochastic object model (SOM) that describes the variability in the class of objects to-be-imaged.
Because medical imaging systems record imaging measurements that are noisy and indirect representations of object properties, GANs cannot be directly applied to establish models of objects to-be-imaged.
arXiv Detail & Related papers (2020-01-26T21:33:14Z)
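For the surrogate-conditional extraction idea described in the memorization entry above (the sketch referenced there), one minimal way to realize classifier guidance over an unconditional DDPM is shown below. The denoiser and classifier interfaces, the guidance scale, and the noise-schedule handling are assumptions for illustration and do not reproduce the SIDE implementation.

```python
# Illustrative sketch only: ancestral DDPM sampling steered by a
# time-dependent classifier acting as a surrogate condition.
# `denoiser(x, t)` is assumed to predict the added noise, `classifier(x, t)`
# to return class logits, and `betas` to be a 1-D tensor such as
# torch.linspace(1e-4, 0.02, 1000); all of these are assumptions.
import torch

@torch.no_grad()
def guided_sample(denoiser, classifier, target_class, shape,
                  betas, guidance_scale=2.0, device="cpu"):
    """Sample from an unconditional DDPM with classifier guidance."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape, device=device)

    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)

        # Gradient of log p(y | x_t) from the time-dependent classifier.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            log_probs = torch.log_softmax(classifier(x_in, t_batch), dim=-1)
            grad = torch.autograd.grad(log_probs[:, target_class].sum(), x_in)[0]

        # Unconditional posterior mean recovered from the predicted noise.
        eps = denoiser(x, t_batch)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])

        # Shift the mean in the direction favored by the surrogate condition.
        mean = mean + guidance_scale * betas[t] * grad

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

The point the sketch conveys is that the classifier supplies a gradient of log p(y | x_t) at every reverse step, nudging the unconditional sampler toward a chosen surrogate class; SIDE trains such a classifier on generated data and uses it in this role to extract training data from unconditional DPMs.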