Ordinal Diffusion Models for Color Fundus Images
- URL: http://arxiv.org/abs/2602.24013v1
- Date: Fri, 27 Feb 2026 13:36:28 GMT
- Title: Ordinal Diffusion Models for Color Fundus Images
- Authors: Gustav Schmidt, Philipp Berens, Sarah Müller
- Abstract summary: Most conditional diffusion models treat disease stages as independent classes, ignoring the continuous nature of disease progression. We propose an ordinal latent diffusion model for generating color fundus images that explicitly incorporates the ordered structure of diabetic retinopathy severity into the generation process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has been suggested that generative image models such as diffusion models can improve performance on clinically relevant tasks by offering deep learning models supplementary training data. However, most conditional diffusion models treat disease stages as independent classes, ignoring the continuous nature of disease progression. This mismatch is problematic in medical imaging because continuous pathological processes are typically only observed through coarse, discrete but ordered labels as in ophthalmology for diabetic retinopathy (DR). We propose an ordinal latent diffusion model for generating color fundus images that explicitly incorporates the ordered structure of DR severity into the generation process. Instead of categorical conditioning, we used a scalar disease representation, enabling a smooth transition between adjacent stages. We evaluated our approach using visual realism metrics and classification-based clinical consistency analysis on the EyePACS dataset. Compared to a standard conditional diffusion model, our model reduced the Fréchet inception distance for four of the five DR stages and increased the quadratic weighted $κ$ from 0.79 to 0.87. Furthermore, interpolation experiments showed that the model captured a continuous spectrum of disease progression learned from ordered, coarse class labels.
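The quadratic weighted $κ$ reported above (0.79 → 0.87) measures grader agreement on ordinal labels, penalizing disagreements by the squared distance between grades. A minimal pure-Python sketch of the metric (the helper name `quadratic_weighted_kappa` is ours, not from the paper):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa for ordinal labels 0..n_classes-1."""
    n = len(y_true)
    # Observed confusion matrix
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1.0
    row = [sum(obs[i]) for i in range(n_classes)]  # true-label marginals
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * row[i] * col[j] / n           # expected under chance agreement
    return 1.0 - num / den

grades = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
print(quadratic_weighted_kappa(grades, grades, 5))  # → 1.0
```

Perfect agreement scores 1.0, while predictions that land on adjacent stages are penalized far less than those several stages away, which is exactly why the metric suits ordered DR grades.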
Related papers
- Do Pathology Foundation Models Encode Disease Progression? A Pseudotime Analysis of Visual Representations [0.0]
We show vision foundation models can implicitly learn to represent continuous processes from independent static observations. This framework could be applied to other domains where continuous processes are observed through static snapshots.
arXiv Detail & Related papers (2026-01-29T06:50:43Z)
- FoundDiff: Foundational Diffusion Model for Generalizable Low-Dose CT Denoising [55.04342933312839]
We propose FoundDiff, a foundational diffusion model for unified and generalizable low-dose computed tomography (CT) denoising. FoundDiff employs a two-stage strategy: (i) dose-anatomy perception and (ii) adaptive denoising. First, we develop a dose- and anatomy-aware contrastive language image pre-training model (DA-CLIP) to achieve robust dose and anatomy perception. Second, we design a dose- and anatomy-aware diffusion model (DA-Diff) to perform adaptive and generalizable denoising.
arXiv Detail & Related papers (2025-08-24T11:03:56Z)
- Parameterized Diffusion Optimization enabled Autoregressive Ordinal Regression for Diabetic Retinopathy Grading [53.11883409422728]
This work proposes a novel autoregressive ordinal regression method called AOR-DR. We decompose the diabetic retinopathy grading task into a series of ordered steps by fusing the prediction of the previous steps with extracted image features. We exploit the diffusion process to facilitate conditional probability modeling, enabling the direct use of continuous global image features for autoregression.
arXiv Detail & Related papers (2025-07-07T13:22:35Z)
- One-for-More: Continual Diffusion Model for Anomaly Detection [63.50488826645681]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images. Our study found that the diffusion model suffers from severe "faithfulness hallucination" and "catastrophic forgetting". We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
arXiv Detail & Related papers (2025-02-27T07:47:27Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation. We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose a novel syncretic generative model based on the latent diffusion model for medical image translation (S$^2$LDM).
S$^2$LDM enhances the similarity in distinct modal images via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more details in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Generating Realistic Counterfactuals for Retinal Fundus and OCT Images using Diffusion Models [36.81751569090276]
Counterfactual reasoning is often used in clinical settings to explain decisions or weigh alternatives.
Here, we demonstrate that using a diffusion model in combination with an adversarially robust classifier trained on retinal disease classification tasks enables the generation of highly realistic counterfactuals.
In a user study, domain experts found the counterfactuals generated using our method significantly more realistic than counterfactuals generated from a previous method, and even indistinguishable from real images.
arXiv Detail & Related papers (2023-11-20T09:28:04Z)
- DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification [32.67098520984195]
We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-03-19T09:15:45Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- Cost-Sensitive Regularization for Diabetic Retinopathy Grading from Eye Fundus Images [20.480034690570196]
We propose a straightforward approach to enforce the constraint for the task of predicting Diabetic Retinopathy (DR) severity from eye fundus images.
We expand standard classification losses with an extra term that acts as a regularizer.
We show how to adapt our method to the modelling of label noise in each of the sub-problems associated to DR grading.
arXiv Detail & Related papers (2020-10-01T10:42:06Z)
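The cost-sensitive regularization entry above expands a standard classification loss with an extra regularizing term. One way such a term can look is a penalty on probability mass placed far from the true grade; the sketch below is illustrative only, with `cost_sensitive_loss` and the squared-distance cost being our assumptions, not the paper's exact formulation:

```python
import math

def cost_sensitive_loss(probs, y, lam=1.0):
    """Cross-entropy plus an illustrative cost-sensitive regularizer:
    probability mass on grades far from the true grade y is penalized
    quadratically (not the paper's exact formulation)."""
    ce = -math.log(probs[y])
    # Expected squared grade distance under the predicted distribution
    reg = sum(p * (i - y) ** 2 for i, p in enumerate(probs))
    return ce + lam * reg

# An off-by-one grading error costs less than an off-by-four error:
near = cost_sensitive_loss([0.05, 0.80, 0.05, 0.05, 0.05], y=0)
far = cost_sensitive_loss([0.05, 0.05, 0.05, 0.05, 0.80], y=0)
print(near < far)  # → True
```

The regularizer encodes the ordinal structure of DR grades directly in the loss: unlike plain cross-entropy, it distinguishes a near-miss from a gross misgrading.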
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.