fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting
- URL: http://arxiv.org/abs/2507.13146v1
- Date: Thu, 17 Jul 2025 14:10:51 GMT
- Title: fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting
- Authors: Alicia Durrer, Florentin Bieder, Paul Friedrich, Bjoern Menze, Philippe C. Cattin, Florian Kofler,
- Abstract summary: We present a 3D wavelet diffusion model (WDM3D) that does not include a GAN component. Our model is up to 800x faster while still achieving superior performance metrics. Our proposed method, fastWDM3D, represents a promising approach for fast and accurate healthy tissue inpainting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Healthy tissue inpainting has significant applications, including the generation of pseudo-healthy baselines for tumor growth models and the facilitation of image registration. In previous editions of the BraTS Local Synthesis of Healthy Brain Tissue via Inpainting Challenge, denoising diffusion probabilistic models (DDPMs) demonstrated qualitatively convincing results but suffered from low sampling speed. To mitigate this limitation, we adapted a 2D image generation approach, combining DDPMs with generative adversarial networks (GANs) and employing a variance-preserving noise schedule, for the task of 3D inpainting. Our experiments showed that the variance-preserving noise schedule and the selected reconstruction losses can be effectively utilized for high-quality 3D inpainting in a few time steps without requiring adversarial training. We applied our findings to a different architecture, a 3D wavelet diffusion model (WDM3D) that does not include a GAN component. The resulting model, denoted as fastWDM3D, obtained a SSIM of 0.8571, a MSE of 0.0079, and a PSNR of 22.26 on the BraTS inpainting test set. Remarkably, it achieved these scores using only two time steps, completing the 3D inpainting process in 1.81 s per image. When compared to other DDPMs used for healthy brain tissue inpainting, our model is up to 800x faster while still achieving superior performance metrics. Our proposed method, fastWDM3D, represents a promising approach for fast and accurate healthy tissue inpainting. Our code is available at https://github.com/AliciaDurrer/fastWDM3D.
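The abstract's key ingredient, a variance-preserving (VP) noise schedule used over very few time steps, can be illustrated with a minimal sketch. All names, shapes, and the linear beta schedule below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def vp_schedule(num_steps: int, beta_min: float = 1e-4, beta_max: float = 0.02):
    """Linear beta schedule. The process is variance-preserving because
    alpha_bar_t + (1 - alpha_bar_t) = 1, so unit-variance data keeps
    unit variance at every noise level."""
    betas = np.linspace(beta_min, beta_max, num_steps)
    alphas = 1.0 - betas
    return np.cumprod(alphas)

def add_noise(x0: np.ndarray, alpha_bar: float, rng: np.random.Generator):
    """Forward (noising) step of a VP diffusion process."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps

rng = np.random.default_rng(0)
alpha_bars = vp_schedule(num_steps=2)   # fastWDM3D reportedly needs only 2 steps
x0 = rng.standard_normal((4, 4, 4))    # toy unit-variance 3D volume
x_t, eps = add_noise(x0, alpha_bars[-1], rng)
print(x_t.shape)  # (4, 4, 4)
```

With a reconstruction loss on the predicted clean volume, a model trained under such a schedule can be sampled in a handful of steps, which is what makes the reported 1.81 s per 3D image plausible.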
Related papers
- U-Net Based Healthy 3D Brain Tissue Inpainting [5.347187213114967]
This paper introduces a novel approach to synthesize healthy 3D brain tissue from masked input images. Our proposed method employs a U-Net-based architecture, which is designed to effectively reconstruct the missing or corrupted regions of brain MRI scans. Our model is trained on the BraTS-Local-Inpainting dataset and demonstrates exceptional performance in recovering healthy brain tissue.
arXiv Detail & Related papers (2025-07-24T06:26:46Z) - Hierarchical Diffusion Framework for Pseudo-Healthy Brain MRI Inpainting with Enhanced 3D Consistency [3.4844189568364348]
Pseudo-healthy image inpainting is an essential preprocessing step for analyzing pathological brain MRI scans. Most current inpainting methods favor slice-wise 2D models for their high in-plane fidelity, but their independence across slices produces discontinuities in the volume. We address these limitations with a hierarchical diffusion framework by replacing direct 3D modeling with two coarse-to-fine 2D stages.
arXiv Detail & Related papers (2025-07-23T20:21:29Z) - Consistency^2: Consistent and Fast 3D Painting with Latent Consistency Models [29.818123424954294]
Generative 3D Painting is among the top productivity boosters in high-resolution 3D asset management and recycling.
We propose a Latent Consistency Model (LCM) adaptation for the task at hand.
We analyze the strengths and weaknesses of the proposed model and evaluate it quantitatively and qualitatively.
arXiv Detail & Related papers (2024-06-17T04:40:07Z) - Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction [153.52406455209538]
Gamba is an end-to-end 3D reconstruction model from a single-view image.
It completes reconstruction within 0.05 seconds on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2024-03-27T17:40:14Z) - Denoising Diffusion Models for Inpainting of Healthy Brain Tissue [0.7022492404644499]
This paper is a contribution to the "BraTS 2023 Local Synthesis of Healthy Brain Tissue via Inpainting Challenge"
The task of this challenge is to transform tumor tissue into healthy tissue in brain magnetic resonance (MR) images.
We use a 2D model trained on slices from which healthy tissue was cropped out, learning to inpaint the removed regions again.
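The training setup described above, cropping out healthy tissue and learning to inpaint it, amounts to conditioning the model on a masked image together with the mask. A minimal data-preparation sketch (the rectangular mask and 2-channel layout are assumptions for illustration):

```python
import numpy as np

def make_inpainting_pair(slice_2d: np.ndarray, box):
    """Zero out a rectangular region of a 2D slice and return
    (model_input, target). The network sees the masked slice plus the
    binary mask and is trained to reconstruct the original slice."""
    y0, y1, x0, x1 = box
    mask = np.zeros_like(slice_2d)
    mask[y0:y1, x0:x1] = 1.0
    masked = slice_2d * (1.0 - mask)
    model_input = np.stack([masked, mask])  # 2-channel conditioning input
    return model_input, slice_2d

slice_2d = np.arange(16.0).reshape(4, 4)
inp, target = make_inpainting_pair(slice_2d, box=(1, 3, 1, 3))
print(inp.shape)     # (2, 4, 4)
print(inp[0, 1, 1])  # 0.0 -- a masked-out pixel
```

At inference time the same conditioning is built from the tumor mask, so the model fills the lesion region with plausible healthy tissue.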
arXiv Detail & Related papers (2024-02-27T08:31:39Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model [68.98311213582949]
We propose Instant3D, a novel method that generates high-quality and diverse 3D assets from text prompts in a feed-forward manner.
Our method can generate diverse 3D assets of high visual quality within 20 seconds, two orders of magnitude faster than previous optimization-based methods.
arXiv Detail & Related papers (2023-11-10T18:03:44Z) - DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation [55.661467968178066]
We propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously.
Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space.
In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks.
arXiv Detail & Related papers (2023-09-28T17:55:05Z) - TextMesh: Generation of Realistic 3D Meshes From Text Prompts [56.2832907275291]
We propose a novel method for generation of highly realistic-looking 3D meshes.
To this end, we extend NeRF to employ an SDF backbone, leading to improved 3D mesh extraction.
arXiv Detail & Related papers (2023-04-24T20:29:41Z) - IC3D: Image-Conditioned 3D Diffusion for Shape Generation [4.470499157873342]
Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated exceptional performance in various 2D generative tasks.
We introduce CISP (Contrastive Image-Shape Pre-training), obtaining a well-structured image-shape joint embedding space.
We then introduce IC3D, a DDPM that harnesses CISP's guidance for 3D shape generation from single-view images.
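Contrastive image-shape pre-training follows the same recipe as CLIP-style objectives: embed both modalities and pull matched pairs together in a joint space. A generic sketch of the symmetric InfoNCE loss (this is the standard formulation, not the CISP architecture itself; encoders are omitted):

```python
import numpy as np

def info_nce(img_emb: np.ndarray, shape_emb: np.ndarray, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of matched (image, shape) pairs.
    Row i of each matrix is assumed to describe the same object."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    shp = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    logits = img @ shp.T / temperature  # (B, B) cosine-similarity matrix
    labels = np.arange(len(logits))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()      # diagonal = matched pairs

    # average over both directions: image->shape and shape->image
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(1)
emb = rng.standard_normal((8, 32))
aligned = info_nce(emb, emb)                          # matched pairs: low loss
shuffled = info_nce(emb, rng.standard_normal((8, 32)))  # mismatched: higher loss
print(aligned < shuffled)  # True
```

Once such a joint embedding exists, its image embedding can guide the DDPM's shape generation, which is the role CISP plays for IC3D.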
arXiv Detail & Related papers (2022-11-20T04:21:42Z) - Magic3D: High-Resolution Text-to-3D Content Creation [78.40092800817311]
DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF).
In this paper, we address these limitations by utilizing a two-stage optimization framework.
Our method, dubbed Magic3D, can create high quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion.
arXiv Detail & Related papers (2022-11-18T18:59:59Z) - Inflating 2D Convolution Weights for Efficient Generation of 3D Medical Images [35.849240945334]
Two problems prevent effective training of a 3D medical generative model: 3D medical images are expensive to acquire and annotate, and a large number of parameters are involved in 3D convolution.
We propose a novel GAN model called 3D Split&Shuffle-GAN.
We show that our method leads to improved 3D image generation quality with significantly fewer parameters.
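Weight inflation, copying a pretrained 2D kernel along the new depth axis and rescaling so activations keep their magnitude, is the standard way to bootstrap 3D convolutions from 2D ones. A sketch of that idea (the uniform 1/depth rescaling is one common choice; the paper's exact scheme may differ):

```python
import numpy as np

def inflate_2d_to_3d(w2d: np.ndarray, depth: int) -> np.ndarray:
    """Inflate a 2D conv kernel (out, in, kh, kw) into a 3D kernel
    (out, in, depth, kh, kw). Dividing by `depth` makes the response to a
    depth-constant input identical to the original 2D response."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

w2d = np.ones((8, 4, 3, 3))          # toy pretrained 2D weights
w3d = inflate_2d_to_3d(w2d, depth=3)
print(w3d.shape)  # (8, 4, 3, 3, 3)
# summing over the depth axis recovers the original 2D kernel
print(np.allclose(w3d.sum(axis=2), w2d))  # True
```

This lets a 3D generator start from cheap, abundant 2D pretraining, which addresses the data-scarcity problem the abstract raises.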
arXiv Detail & Related papers (2022-08-08T06:31:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.