Robust semi-supervised segmentation with timestep ensembling diffusion models
- URL: http://arxiv.org/abs/2311.07421v1
- Date: Mon, 13 Nov 2023 15:57:17 GMT
- Title: Robust semi-supervised segmentation with timestep ensembling diffusion models
- Authors: Margherita Rosnati and Melanie Roschewitz and Ben Glocker
- Abstract summary: This work focuses on semi-supervised image segmentation using diffusion models.
We demonstrate that smaller diffusion steps generate latent representations that are more robust for downstream tasks than larger steps.
Our model shows significantly better performance in domain-shifted settings while retaining competitive performance in-domain.
- Score: 15.816699232409036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image segmentation is a challenging task, made more difficult by many
datasets' limited size and annotations. Denoising diffusion probabilistic
models (DDPM) have recently shown promise in modelling the distribution of
natural images and were successfully applied to various medical imaging tasks.
This work focuses on semi-supervised image segmentation using diffusion models,
particularly addressing domain generalisation. Firstly, we demonstrate that
smaller diffusion steps generate latent representations that are more robust
for downstream tasks than larger steps. Secondly, we use this insight to
propose an improved ensembling scheme that leverages information-dense small
steps and the regularising effect of larger steps to generate predictions. Our
model shows significantly better performance in domain-shifted settings while
retaining competitive performance in-domain. Overall, this work highlights the
potential of DDPMs for semi-supervised medical image segmentation and provides
insights into optimising their performance under domain shift.
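The abstract describes combining information-dense small diffusion steps with the regularising effect of larger steps. A minimal sketch of such a weighted timestep ensemble, assuming per-step segmentation probability maps are already available (the function name, weighting, and threshold are illustrative, not the authors' implementation):

```python
import numpy as np

def ensemble_timestep_predictions(per_step_probs, weights=None):
    """Combine per-timestep segmentation probability maps.

    per_step_probs: array of shape (T, H, W), one probability map
    predicted from the latent representation at each diffusion step.
    weights: optional per-step weights, e.g. favouring small
    (information-dense) steps while keeping larger steps as a
    regulariser. Defaults to a uniform average.
    """
    per_step_probs = np.asarray(per_step_probs, dtype=float)
    if weights is None:
        weights = np.ones(per_step_probs.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Weighted average over the timestep axis, then threshold.
    mean_prob = np.tensordot(weights, per_step_probs, axes=1)
    return (mean_prob > 0.5).astype(np.uint8)
```

Up-weighting the small steps biases the ensemble toward the more robust representations while the larger steps still smooth the final mask.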
Related papers
- Diffusion Based Ambiguous Image Segmentation [4.847141930102934]
We explore the design space of diffusion models for generative segmentation.
We find that making the noise schedule harder with input scaling significantly improves performance.
We base our experiments on the LIDC-IDRI lung lesion dataset and obtain state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2025-04-08T12:33:26Z)
- Enhancing SAM with Efficient Prompting and Preference Optimization for Semi-supervised Medical Image Segmentation [30.524999223901645]
We propose an enhanced Segment Anything Model (SAM) framework that utilizes annotation-efficient prompts generated in a fully unsupervised fashion.
We adopt the direct preference optimization technique to design an optimal policy that enables the model to generate high-fidelity segmentations.
State-of-the-art performance of our framework in tasks such as lung segmentation, breast tumor segmentation, and organ segmentation across various modalities, including X-ray, ultrasound, and abdominal CT, justifies its effectiveness in low-annotation data scenarios.
arXiv Detail & Related papers (2025-03-06T17:28:48Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation [56.87049651707208]
Few-shot Semantic Segmentation has evolved into an in-context task, becoming a crucial element in assessing generalist segmentation models.
Our initial focus lies in understanding how to facilitate interaction between the query image and the support image, resulting in the proposal of a KV fusion method within the self-attention framework.
Based on our analysis, we establish a simple and effective framework named DiffewS, maximally retaining the original Latent Diffusion Model's generative framework.
arXiv Detail & Related papers (2024-10-03T10:33:49Z)
- Denoising Diffusions in Latent Space for Medical Image Segmentation [14.545920180010201]
Diffusion models (DPMs) have demonstrated remarkable performance in image generation, often outperforming other generative models.
We propose a novel conditional generative modeling framework (LDSeg) that performs diffusion in latent space for medical image segmentation.
arXiv Detail & Related papers (2024-07-17T18:44:38Z)
- Diffusion Models Without Attention [110.5623058129782]
Diffusion State Space Model (DiffuSSM) is an architecture that supplants attention mechanisms with a more scalable state space model backbone.
Our focus on FLOP-efficient architectures in diffusion training marks a significant step forward.
arXiv Detail & Related papers (2023-11-30T05:15:35Z)
- On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation [47.95611203419802]
Foundation models for vision and language, pre-trained on extensive sets of natural image and text data, have emerged as a promising approach.
We compare the generalization performance to unseen domains of various pre-trained models after being fine-tuned on the same in-distribution dataset.
We further develop a new Bayesian uncertainty estimation for frozen models and use it as an indicator to characterize the model's performance on out-of-distribution data.
arXiv Detail & Related papers (2023-11-18T14:52:10Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- SDDM: Score-Decomposed Diffusion Models on Manifolds for Unpaired Image-to-Image Translation [96.11061713135385]
This work presents a new score-decomposed diffusion model to explicitly optimize the tangled distributions during image generation.
We equalize the refinement parts of the score function and energy guidance, which permits multi-objective optimization on the manifold.
SDDM outperforms existing SBDM-based methods with much fewer diffusion steps on several I2I benchmarks.
arXiv Detail & Related papers (2023-08-04T06:21:57Z)
- Conditional Diffusion Models for Weakly Supervised Medical Image Segmentation [18.956306942099097]
Conditional diffusion models (CDMs) are capable of generating images subject to specific distributions.
We utilize the category-aware semantic information underlying CDMs to obtain the prediction mask of the target object.
Our method outperforms state-of-the-art CAM and diffusion model methods on two public medical image segmentation datasets.
arXiv Detail & Related papers (2023-06-06T17:29:26Z)
- Denoising Diffusion Semantic Segmentation with Mask Prior Modeling [61.73352242029671]
We propose to ameliorate the semantic segmentation quality of existing discriminative approaches with a mask prior modeled by a denoising diffusion generative model.
We evaluate the proposed prior modeling with several off-the-shelf segmentors, and our experimental results on ADE20K and Cityscapes demonstrate that our approach achieves competitive quantitative performance.
arXiv Detail & Related papers (2023-06-02T17:47:01Z)
- Label-Efficient Semantic Segmentation with Diffusion Models [27.01899943738203]
We demonstrate that diffusion models can also serve as an instrument for semantic segmentation.
In particular, for several pretrained diffusion models, we investigate the intermediate activations from the networks that perform the Markov step of the reverse diffusion process.
We show that these activations effectively capture the semantic information from an input image and appear to be excellent pixel-level representations for the segmentation problem.
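The idea of using intermediate reverse-diffusion activations as pixel-level representations can be sketched as follows, assuming hypothetical multi-scale activation arrays rather than a real diffusion UNet; the function name and shapes are illustrative:

```python
import numpy as np

def pixel_features_from_activations(activations, out_hw):
    """Upsample hypothetical intermediate network activations to image
    resolution and concatenate them into per-pixel feature vectors,
    which can then feed a simple per-pixel segmentation classifier.

    activations: list of arrays, each with shape (C_i, h_i, w_i).
    out_hw: (H, W) target image resolution.
    Returns an array of shape (H*W, sum_i C_i).
    """
    H, W = out_hw
    feats = []
    for a in activations:
        c, h, w = a.shape
        # Nearest-neighbour upsampling via index mapping.
        ys = np.arange(H) * h // H
        xs = np.arange(W) * w // W
        up = a[:, ys][:, :, xs]          # (c, H, W)
        feats.append(up.reshape(c, H * W))
    return np.concatenate(feats, axis=0).T
```

Each image pixel ends up with one feature vector drawn from all scales, so a lightweight classifier trained on a few labelled pixels can produce a dense segmentation map.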
arXiv Detail & Related papers (2021-12-06T15:55:30Z)
- Realistic Adversarial Data Augmentation for MR Image Segmentation [17.951034264146138]
We propose an adversarial data augmentation method for training neural networks for medical image segmentation.
Our model generates plausible and realistic signal corruptions, modelling the intensity inhomogeneities caused by a common type of MR imaging artefact: bias field.
We show that such an approach can improve the generalization ability and robustness of models, as well as provide significant improvements in low-data scenarios.
arXiv Detail & Related papers (2020-06-23T20:43:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.