Reference-Free Isotropic 3D EM Reconstruction using Diffusion Models
- URL: http://arxiv.org/abs/2308.01594v1
- Date: Thu, 3 Aug 2023 07:57:02 GMT
- Title: Reference-Free Isotropic 3D EM Reconstruction using Diffusion Models
- Authors: Kyungryun Lee and Won-Ki Jeong
- Abstract summary: We propose a diffusion-model-based framework that overcomes the limitations of requiring reference data or prior knowledge about the degradation process.
Our approach utilizes 2D diffusion models to consistently reconstruct 3D volumes and is well-suited for highly downsampled data.
- Score: 8.590026259176806
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Electron microscopy (EM) images exhibit anisotropic axial resolution due to
the characteristics inherent to the imaging modality, presenting challenges in
analysis and downstream tasks. In this paper, we propose a diffusion-model-based
framework that overcomes the limitations of requiring reference data or prior
knowledge about the degradation process. Our approach utilizes 2D diffusion
models to consistently reconstruct 3D volumes and is well-suited for highly
downsampled data. Extensive experiments conducted on two public datasets
demonstrate the robustness and superiority of leveraging the generative prior
compared to supervised learning methods. Additionally, we demonstrate our
method's feasibility for self-supervised reconstruction, which can restore a
single anisotropic volume without any training data.
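The abstract's core idea, reconstructing an isotropic 3D volume by applying a 2D diffusion prior slice-wise while staying consistent with the observed anisotropic data, can be sketched in miniature. Everything below is an illustrative assumption rather than the paper's actual method: the average-pooling degradation along z, the `toy_denoiser` (a stand-in for a trained 2D diffusion denoiser), and the simple alternating scheme are all hypothetical.

```python
import numpy as np

def toy_denoiser(sl, t):
    # Hypothetical stand-in for a trained 2D diffusion denoiser: pull each
    # anisotropic xz-slice towards a z-smoothed version, more strongly at
    # high noise level t. A real system would use a score network trained
    # on high-resolution lateral slices.
    blurred = (np.roll(sl, 1, axis=0) + sl + np.roll(sl, -1, axis=0)) / 3.0
    return (1 - t) * sl + t * blurred

def reconstruct(low_res, factor=4, steps=10):
    # Sketch of slice-wise isotropic reconstruction: alternate
    # (a) 2D "diffusion" denoising of every xz-slice with
    # (b) data consistency against the observed low-axial-resolution volume,
    # here ASSUMING the degradation is z average-pooling (the paper itself
    # does not require knowing the degradation operator).
    vol = np.repeat(low_res, factor, axis=0)          # naive z upsampling init
    for i in range(steps):
        t = 1.0 - i / steps                           # decreasing noise level
        for y in range(vol.shape[1]):                 # (a) 2D prior per slice
            vol[:, y, :] = toy_denoiser(vol[:, y, :], t)
        # (b) re-impose the observed z-averages on the current estimate
        low = vol.reshape(-1, factor, *vol.shape[1:]).mean(axis=1)
        vol = vol + np.repeat(low_res - low, factor, axis=0)
    return vol

rng = np.random.default_rng(0)
observed = rng.random((8, 16, 16))                    # low-res: 8 z-slices
restored = reconstruct(observed, factor=4)
print(restored.shape)                                 # (32, 16, 16)
# the restored volume still reproduces the observation when re-degraded
print(np.allclose(restored.reshape(8, 4, 16, 16).mean(axis=1), observed))
```

The data-consistency step is what keeps independently denoised 2D slices coherent as a single 3D volume, which is the property the abstract emphasizes.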
Related papers
- Efficient One-Step Diffusion Refinement for Snapshot Compressive Imaging [8.819370643243012]
Coded Aperture Snapshot Spectral Imaging (CASSI) is a crucial technique for capturing three-dimensional multispectral images (MSIs).
Current state-of-the-art methods, predominantly end-to-end, face limitations in reconstructing high-frequency details.
This paper introduces a novel one-step Diffusion Probabilistic Model within a self-supervised adaptation framework for Snapshot Compressive Imaging.
arXiv Detail & Related papers (2024-09-11T17:02:10Z)
- SeisFusion: Constrained Diffusion Model with Input Guidance for 3D Seismic Data Interpolation and Reconstruction [26.02191880837226]
We propose a novel diffusion model reconstruction framework tailored for 3D seismic data.
We introduce a 3D neural network architecture into the diffusion model, successfully extending the 2D diffusion model to 3D space.
Our method exhibits superior reconstruction accuracy when applied to both field datasets and synthetic datasets.
arXiv Detail & Related papers (2024-03-18T05:10:13Z)
- A Generative Machine Learning Model for Material Microstructure 3D Reconstruction and Performance Evaluation [4.169915659794567]
The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-Net with the generative capabilities of a GAN has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
arXiv Detail & Related papers (2024-02-24T13:42:34Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- D-SCo: Dual-Stream Conditional Diffusion for Monocular Hand-Held Object Reconstruction [74.49121940466675]
We introduce centroid-fixed dual-stream conditional diffusion for monocular hand-held object reconstruction.
First, to prevent the object centroid from deviating, we utilize a novel hand-constrained centroid fixing paradigm.
Second, we introduce a dual-stream denoiser to semantically and geometrically model hand-object interactions.
arXiv Detail & Related papers (2023-11-23T20:14:50Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- DiffuseIR: Diffusion Models For Isotropic Reconstruction of 3D Microscopic Images [20.49786054144047]
We propose DiffuseIR, an unsupervised method for isotropic reconstruction based on diffusion models.
First, we pre-train a diffusion model to learn the structural distribution of biological tissue from lateral microscopic images.
Then we use low-axial-resolution microscopy images to condition the generation process of the diffusion model and generate high-axial-resolution reconstruction results.
arXiv Detail & Related papers (2023-06-21T08:49:28Z)
- Deceptive-NeRF/3DGS: Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction [60.52716381465063]
We introduce Deceptive-NeRF/3DGS to enhance sparse-view reconstruction with only a limited set of input images.
Specifically, we propose a deceptive diffusion model turning noisy images rendered from few-view reconstructions into high-quality pseudo-observations.
Our system progressively incorporates diffusion-generated pseudo-observations into the training image sets, ultimately densifying the sparse input observations by 5 to 10 times.
arXiv Detail & Related papers (2023-05-24T14:00:32Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models [33.343489006271255]
Diffusion models have emerged as the new state-of-the-art generative model, producing high-quality samples.
We propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions.
Our method can be run on a single commodity GPU, and establishes the new state-of-the-art.
arXiv Detail & Related papers (2022-11-19T10:32:21Z)
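The last entry's idea of augmenting a 2D diffusion prior with a model-based prior in the remaining direction can be illustrated with a toy reverse step: a 2D prior acts within each lateral xy-slice, while a total-variation (TV) subgradient step smooths along z. Both components here are hypothetical placeholders (a 4-neighbour average instead of a trained score network, a hand-rolled TV step instead of the paper's exact conditioning), not the published algorithm.

```python
import numpy as np

def tv_step_z(vol, lam=0.05):
    # Single subgradient step reducing total variation along z: the
    # model-based prior for the direction the 2D diffusion model never sees.
    s = np.sign(np.diff(vol, axis=0))
    grad = np.zeros_like(vol)
    grad[:-1] -= s            # d/dv[z]   of |v[z+1] - v[z]|
    grad[1:] += s             # d/dv[z+1] of |v[z+1] - v[z]|
    return vol - lam * grad

def toy_denoise_xy(sl):
    # Hypothetical 2D prior: a trained diffusion model would denoise each
    # lateral xy-slice; a 4-neighbour average stands in for it here.
    return (np.roll(sl, 1, axis=0) + np.roll(sl, -1, axis=0) +
            np.roll(sl, 1, axis=1) + np.roll(sl, -1, axis=1)) / 4.0

def reverse_step(vol, lam=0.05):
    # One combined reverse step: 2D prior in-plane, TV prior along z.
    out = np.stack([toy_denoise_xy(vol[z]) for z in range(vol.shape[0])])
    return tv_step_z(out, lam)

def tv_z(v):
    return np.abs(np.diff(v, axis=0)).sum()

rng = np.random.default_rng(1)
noisy = rng.standard_normal((8, 16, 16))
refined = reverse_step(noisy)
print(tv_z(refined) < tv_z(noisy))    # True: smoother along z after the step
```

Iterating such steps with decreasing noise levels is what lets a purely 2D generative prior produce reconstructions that remain coherent across all three dimensions.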
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.