DiffuseIR: Diffusion Models For Isotropic Reconstruction of 3D Microscopic Images
- URL: http://arxiv.org/abs/2306.12109v1
- Date: Wed, 21 Jun 2023 08:49:28 GMT
- Title: DiffuseIR: Diffusion Models For Isotropic Reconstruction of 3D Microscopic Images
- Authors: Mingjie Pan, Yulu Gan, Fangxu Zhou, Jiaming Liu, Aimin Wang, Shanghang Zhang, Dawei Li
- Abstract summary: We propose DiffuseIR, an unsupervised method for isotropic reconstruction based on diffusion models.
First, we pre-train a diffusion model to learn the structural distribution of biological tissue from lateral microscopic images.
Then we use low-axial-resolution microscopy images to condition the generation process of the diffusion model and generate high-axial-resolution reconstruction results.
- Score: 20.49786054144047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Three-dimensional microscopy is often limited by anisotropic spatial
resolution, resulting in lower axial resolution than lateral resolution.
Current state-of-the-art (SoTA) isotropic reconstruction methods utilizing deep
neural networks can achieve impressive super-resolution performance in fixed
imaging settings. However, their generality in practical use is limited by
degraded performance caused by artifacts and blurring when facing unseen
anisotropic factors. To address these issues, we propose DiffuseIR, an
unsupervised method for isotropic reconstruction based on diffusion models.
First, we pre-train a diffusion model to learn the structural distribution of
biological tissue from lateral microscopic images, so that it naturally
generates high-resolution images. Then we use low-axial-resolution microscopy
images to condition the generation process of the diffusion model and generate
high-axial-resolution reconstruction results. Since the diffusion model learns
the universal structural distribution of biological tissues, which is
independent of the axial resolution, DiffuseIR can reconstruct authentic
high-axial-resolution images from inputs with unseen axial resolutions without
re-training. The proposed DiffuseIR achieves SoTA performance in
experiments on EM data and can even compete with supervised methods.
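
The abstract describes the method only at a high level: a diffusion model is pre-trained on high-resolution lateral slices, and its sampling process is then conditioned on the sparse axial measurements. The sketch below shows one plausible form of such conditioning (RePaint-style known-pixel replacement during DDPM sampling); it is not the authors' implementation, and the names eps_model, betas, x_known and known_mask are illustrative assumptions.

```python
# Minimal sketch, assuming a pre-trained 2D noise predictor eps_model(x_t, t)
# trained on lateral (XY) slices and a standard DDPM noise schedule `betas`.
# Conditioning is done by re-noising the measured axial rows and pasting them
# over the sample at every reverse step (one possible conditioning scheme,
# not necessarily the one used in DiffuseIR).
import torch

def isotropic_reconstruct(eps_model, x_known, known_mask, betas):
    """Sample a high-axial-resolution XZ section conditioned on sparse rows.

    x_known    : (B, C, H, W) axial section; only the measured rows are valid.
    known_mask : (B, 1, H, W) binary mask, 1 where a measured row exists.
    betas      : (T,) noise schedule of the pre-trained diffusion model.
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(x_known)                       # start from pure noise
    for t in reversed(range(len(betas))):
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x, t_batch)                     # predicted noise
        # standard DDPM posterior mean
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # condition on the measurement: bring the known rows to noise level t-1
        # via the forward process, then overwrite the corresponding rows
        if t > 0:
            x_known_t = torch.sqrt(alpha_bar[t - 1]) * x_known + \
                        torch.sqrt(1 - alpha_bar[t - 1]) * torch.randn_like(x_known)
        else:
            x_known_t = x_known
        x = known_mask * x_known_t + (1 - known_mask) * x
    return x
```

In an anisotropic stack, known_mask would mark the measured rows of an XZ or YZ section, while the missing rows are filled in by the structural prior learned from the lateral slices; because the conditioning only uses the mask, the same pre-trained model can handle different anisotropic factors without re-training.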
Related papers
- A Flow-based Truncated Denoising Diffusion Model for Super-resolution Magnetic Resonance Spectroscopic Imaging [34.32290273033808]
This work introduces a Flow-based Truncated Denoising Diffusion Model for super-resolution MRSI.
It shortens the diffusion process by truncating the diffusion chain, and the truncated steps are estimated using a normalizing flow-based network.
We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold.
arXiv Detail & Related papers (2024-10-25T03:42:35Z)
- Effective Diffusion Transformer Architecture for Image Super-Resolution [63.254644431016345]
We design an effective diffusion transformer for image super-resolution (DiT-SR).
In practice, DiT-SR leverages an overall U-shaped architecture, and adopts a uniform isotropic design for all the transformer blocks.
We analyze the limitation of the widely used AdaLN, and present a frequency-adaptive time-step conditioning module.
arXiv Detail & Related papers (2024-09-29T07:14:16Z)
- Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z)
- ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models [126.35334860896373]
We investigate the capability of generating images from pre-trained diffusion models at much higher resolutions than the training image sizes.
Existing works for higher-resolution generation, such as attention-based and joint-diffusion approaches, cannot fully address these issues.
We propose a simple yet effective re-dilation that can dynamically adjust the convolutional perception field during inference.
arXiv Detail & Related papers (2023-10-11T17:52:39Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Reference-Free Isotropic 3D EM Reconstruction using Diffusion Models [8.590026259176806]
We propose a diffusion-model-based framework that overcomes the limitations of requiring reference data or prior knowledge about the degradation process.
Our approach utilizes 2D diffusion models to consistently reconstruct 3D volumes and is well-suited for highly downsampled data.
arXiv Detail & Related papers (2023-08-03T07:57:02Z)
- SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI [14.545736786515837]
We introduce SPIRiT-Diffusion, a diffusion model for k-space inspired by the iterative self-consistent SPIRiT method.
We evaluate the proposed SPIRiT-Diffusion method using a 3D joint intracranial and carotid vessel wall imaging dataset.
arXiv Detail & Related papers (2023-04-11T08:43:52Z)
- High-resolution tomographic reconstruction of optical absorbance through scattering media using neural fields [25.647287240640356]
We propose NeuDOT, a novel DOT scheme based on neural fields (NF).
NeuDOT achieves submillimetre lateral resolution and resolves complex 3D objects at 14 mm-depth, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2023-04-04T10:13:13Z)
- Implicit Diffusion Models for Continuous Super-Resolution [65.45848137914592]
This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution.
IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework.
The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output.
arXiv Detail & Related papers (2023-03-29T07:02:20Z)
- Axial-to-lateral super-resolution for 3D fluorescence microscopy using unsupervised deep learning [19.515134844947717]
We present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in fluorescence microscopy.
Our method greatly reduces the effort required to put it into practice, as training the network requires as little as a single 3D image stack.
We demonstrate that the trained network not only enhances axial resolution beyond the diffraction limit, but also enhances suppressed visual details between the imaging planes and removes imaging artifacts.
arXiv Detail & Related papers (2021-04-19T16:31:12Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)