FSDiffReg: Feature-wise and Score-wise Diffusion-guided Unsupervised
Deformable Image Registration for Cardiac Images
- URL: http://arxiv.org/abs/2307.12035v1
- Date: Sat, 22 Jul 2023 10:09:22 GMT
- Title: FSDiffReg: Feature-wise and Score-wise Diffusion-guided Unsupervised
Deformable Image Registration for Cardiac Images
- Authors: Yi Qin and Xiaomeng Li
- Abstract summary: Unsupervised deformable image registration is one of the challenging tasks in medical imaging.
We present two modules: Feature-wise Diffusion-Guided Module and Score-wise Diffusion-Guided Module.
Experiment results on the 3D cardiac image registration task validate our model's ability to effectively provide refined deformation fields with preserved topology.
- Score: 12.20081061919718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised deformable image registration is one of the challenging tasks in
medical imaging. Obtaining a high-quality deformation field while preserving
deformation topology remains demanding amid a series of deep-learning-based
solutions. Meanwhile, the diffusion model's latent feature space shows
potential in modeling the deformation semantics. To fully exploit the diffusion
model's ability to guide the registration task, we present two modules:
Feature-wise Diffusion-Guided Module (FDG) and Score-wise Diffusion-Guided
Module (SDG). Specifically, FDG uses the diffusion model's multi-scale semantic
features to guide the generation of the deformation field. SDG uses the
diffusion score to guide the optimization process for preserving deformation
topology with barely any additional computation. Experimental results on the
3D cardiac image registration task validate our model's ability to effectively
provide refined deformation fields with preserved topology. Code is
available at: https://github.com/xmed-lab/FSDiffReg.git.
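Topology preservation, as claimed in the abstract, is conventionally quantified through the Jacobian determinant of the deformation field: a non-positive determinant at a voxel indicates folding. Below is a minimal, self-contained sketch of that check in plain NumPy (2D for brevity; the function names are illustrative and not taken from the FSDiffReg codebase):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel Jacobian determinant of the deformation phi(p) = p + disp(p).

    disp: (H, W, 2) displacement field; channel 0 displaces rows (y),
    channel 1 displaces columns (x). Values <= 0 in the result mark
    folding, i.e. locations where the deformation breaks topology.
    """
    du0_dy, du0_dx = np.gradient(disp[..., 0])  # gradients along rows, cols
    du1_dy, du1_dx = np.gradient(disp[..., 1])
    # J = I + grad(disp); determinant of the 2x2 Jacobian.
    return (1.0 + du0_dy) * (1.0 + du1_dx) - du0_dx * du1_dy

def folding_fraction(disp):
    """Fraction of pixels where the deformation folds (det(J) <= 0)."""
    return float(np.mean(jacobian_determinant_2d(disp) <= 0.0))

# Identity deformation: determinant is exactly 1 everywhere, no folding.
identity = np.zeros((8, 8, 2))
assert np.allclose(jacobian_determinant_2d(identity), 1.0)
assert folding_fraction(identity) == 0.0
```

A uniform stretch such as `disp[..., 0] = 0.1 * y` yields a constant determinant of 1.1, while non-smooth fields drive it below zero; registration papers commonly report the count or percentage of such non-positive-Jacobian voxels.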
Related papers
- LDM-Morph: Latent diffusion model guided deformable image registration [2.8195553455247317]
We propose LDM-Morph, an unsupervised deformable registration algorithm for medical images.
LDM-Morph integrates features extracted from a latent diffusion model (LDM) to enrich the semantic information.
Extensive experiments on four public 2D cardiac image datasets show that the proposed LDM-Morph framework outperforms existing state-of-the-art CNN- and Transformer-based registration methods.
arXiv Detail & Related papers (2024-11-23T03:04:36Z)
- KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation [51.03868117057726]
This paper proposes a novel Kalman filter based Linear Deformable Diffusion (KLDD) model for retinal vessel segmentation.
Our model employs a diffusion process that iteratively refines the segmentation, leveraging the flexible receptive fields of deformable convolutions.
Experiments are evaluated on retinal fundus image datasets (DRIVE, CHASE_DB1) and on the 3mm and 6mm subsets of the OCTA-500 dataset.
arXiv Detail & Related papers (2024-09-19T14:21:38Z)
- Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis [13.629617915974531]
Deformation-Recovery Diffusion Model (DRDM) is a diffusion-based generative model based on deformation diffusion and recovery.
DRDM is trained to learn to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution.
Experimental results in cardiac MRI and pulmonary CT show that DRDM is capable of creating diverse, large deformations (over 10% of the image size).
arXiv Detail & Related papers (2024-07-10T01:26:48Z)
- The Journey, Not the Destination: How Data Guides Diffusion Models [75.19694584942623]
Diffusion models trained on large datasets can synthesize photo-realistic images of remarkable quality and diversity.
We propose a framework that: (i) provides a formal notion of data attribution in the context of diffusion models, and (ii) allows us to counterfactually validate such attributions.
arXiv Detail & Related papers (2023-12-11T08:39:43Z)
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input and formulate diffusion models as masked autoencoders (DiffMAE).
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
arXiv Detail & Related papers (2023-04-06T17:59:56Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
- DiffuseMorph: Unsupervised Deformable Image Registration Along Continuous Trajectory Using Diffusion Models [31.826844124173984]
We present a novel approach of diffusion model-based probabilistic image registration, called DiffuseMorph.
Our model learns the score function of the deformation between moving and fixed images.
Our method provides flexible and accurate deformation while preserving topology.
arXiv Detail & Related papers (2021-12-09T08:41:23Z)
- Unsupervised Deep-Learning Based Deformable Image Registration: A Bayesian Framework [0.0]
We introduce a fully Bayesian framework for unsupervised DL-based deformable image registration.
Our method provides a way to characterize the true posterior distribution, thus, avoiding potential over-fitting.
Our approach provides better estimates of the deformation field, as measured by improved mean-squared error.
arXiv Detail & Related papers (2020-08-10T08:15:49Z)
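The papers listed above share the same denoising-diffusion machinery: a forward process that progressively noises an image, and a learned reverse process whose features or score guide the downstream task (registration, segmentation, deblurring). As a reference point, the closed-form forward step q(x_t | x_0) can be sketched in a few lines of NumPy (the function name and schedule are illustrative, not taken from any of the listed codebases):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (DDPM forward process):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).
    Returns (x_t, eps); eps is the target a noise-prediction (score) net learns.
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]   # cumulative signal retention
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

# Linear beta schedule over 10 steps; seeded generator for reproducibility.
betas = np.linspace(1e-4, 0.02, 10)
rng = np.random.default_rng(0)
xt, eps = forward_diffuse(np.zeros((4, 4)), 5, betas, rng)
assert xt.shape == (4, 4)
```

Guidance methods in the vein of FDG/SDG reuse the denoiser's internal features or its predicted eps (equivalently, the score) at steps like this one, rather than sampling new images.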
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.