Bringing together invertible UNets with invertible attention modules for memory-efficient diffusion models
- URL: http://arxiv.org/abs/2504.10883v1
- Date: Tue, 15 Apr 2025 05:26:42 GMT
- Title: Bringing together invertible UNets with invertible attention modules for memory-efficient diffusion models
- Authors: Karan Jain, Mohammad Nayeem Teli
- Abstract summary: We propose a novel architecture for single-GPU memory-efficient training of diffusion models on high-dimensional medical datasets. The proposed model is built using an invertible UNet architecture with invertible attention modules. While this new model can be applied to a multitude of image generation tasks, we showcase its memory efficiency on the 3D BraTS 2020 dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have recently achieved state-of-the-art performance on many image generation tasks. However, most models require significant computational resources to do so. This becomes apparent in medical image synthesis due to the 3D nature of medical datasets such as CT scans, MRIs, and electron microscopy. In this paper we propose a novel architecture for single-GPU memory-efficient training of diffusion models on high-dimensional medical datasets. The proposed model is built using an invertible UNet architecture with invertible attention modules. This leads to the following two contributions: 1. making the denoising diffusion model invertible, thus enabling memory usage to be independent of the dimensionality of the dataset, and 2. reducing the energy usage during training. While this new model can be applied to a multitude of image generation tasks, we showcase its memory efficiency on the 3D BraTS 2020 dataset, achieving up to a 15% decrease in peak memory consumption during training with results comparable to SOTA while maintaining image quality.
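The key property behind the memory savings is that an invertible block's inputs can be recomputed exactly from its outputs, so intermediate activations need not be stored during backpropagation. Below is a minimal sketch of an additive-coupling invertible block, a standard construction of this kind; the names and the toy sub-network are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def split(x):
    # Split the channel dimension into two halves (x has shape [..., C], C even).
    c = x.shape[-1] // 2
    return x[..., :c], x[..., c:]

def f(h):
    # Arbitrary sub-network; it need not be invertible itself.
    # A fixed nonlinearity stands in for a learned UNet/attention sub-block.
    return np.tanh(h)

def forward(x):
    # Additive coupling: y1 = x1 + f(x2), y2 = x2.
    x1, x2 = split(x)
    y1 = x1 + f(x2)
    return np.concatenate([y1, x2], axis=-1)

def inverse(y):
    # Exact inversion: x2 = y2, x1 = y1 - f(y2).
    # Because inputs are recoverable, activations can be recomputed on the
    # backward pass instead of being cached, decoupling memory from depth.
    y1, y2 = split(y)
    x1 = y1 - f(y2)
    return np.concatenate([x1, y2], axis=-1)

x = np.random.randn(4, 8)
assert np.allclose(inverse(forward(x)), x)
```

Stacking such blocks keeps the whole network invertible, which is the mechanism that lets peak training memory stay roughly constant as resolution or depth grows.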
Related papers
- Memory-Efficient 3D High-Resolution Medical Image Synthesis Using CRF-Guided GANs
We propose an end-to-end novel GAN architecture that uses Conditional Random Fields (CRFs) to model dependencies. Our architecture outperforms the state of the art while having lower memory usage and less complexity.
arXiv Detail & Related papers (2025-03-13T21:31:15Z)
- SegResMamba: An Efficient Architecture for 3D Medical Image Segmentation
We propose an efficient 3D segmentation model for medical imaging called SegResMamba.
Our model uses less than half the memory during training compared to other state-of-the-art (SOTA) architectures.
arXiv Detail & Related papers (2025-03-10T18:40:28Z)
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z)
- Memory-Efficient 3D Denoising Diffusion Models for Medical Image Processing
We present a number of ways to reduce the resource consumption for 3D diffusion models.
The main contribution of this paper is the memory-efficient patch-based diffusion model.
While the proposed diffusion model can be applied to any image generation task, we evaluate the method on the tumor segmentation task of the BraTS 2020 dataset.
arXiv Detail & Related papers (2023-03-27T15:10:19Z)
- Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models
Diffusion models have emerged as the new state-of-the-art generative models, producing high-quality samples.
We propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions.
Our method can be run in a single commodity GPU, and establishes the new state-of-the-art.
arXiv Detail & Related papers (2022-11-19T10:32:21Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.